Could Omar Mateen’s killing spree in Orlando, Florida on June 12 have been prevented if Facebook had employed a computer algorithm to flag potential terrorists?

The U.S. Senate’s Homeland Security Committee, for one, may be interested in exploring this tantalizing, though exceptionally fraught, question.

On June 15, the committee's chairman, Senator Ron Johnson, wrote to Facebook Chairman Mark Zuckerberg, asking “to arrange a briefing with Committee staff on the information available to Facebook prior to and during this terrorist attack.”

Johnson noted that five Facebook accounts were associated with Mateen and that he had used Facebook to conduct searches on the terrorist group Daesh, as well as an earlier Daesh-inspired attack in San Bernardino, California, in the period before June 12.

According to several news reports, Mateen also stalked at least one romantic interest via Facebook and in his last post announced: “In the next few days, you will see attacks from [Daesh] in the USA.”

On June 29, Facebook responded to the Committee’s request, drawing praise from Press Secretary Brittni Palke, who tells Newsweek Middle East that Facebook has been and continues to be “cooperat[ive] with the law-enforcement investigation and the committee’s oversight inquiry.”

One top committee aide, who requested anonymity given the sensitivity of the investigation and the fact that the proceedings were closed to the media and the public, explained that the Senate committee is particularly interested in what actions Facebook and others could have taken, or should take, to develop more effective policies against “homegrown extremism.”

For some experts in the field, the American government’s enduring interest in the predictive capabilities of technology—especially when paired with artificial intelligence—raises a series of concerns.

After all, getting poorly targeted advertising from Google or Facebook might be annoying, but having police break down your door in the middle of the night because of a “false positive,” a hacking breach that alters stored data, or plain old identity theft could be life-changing.

More than 15 million Americans are caught up in identity theft each year. In the Middle East, according to a recent survey by the French security firm Gemalto, 60 percent of information technology (IT) professionals say unauthorized users can access their networks, and 36 percent believe unauthorized users would have access to their entire networks once a breach has occurred.

“The situation is much worse than most people know,” argues prominent hacker and author Josh Klein. “The ideal time for discussing governments’ involvement in big data and analytics was probably closer to 10-15 years ago,” he says.

Klein notes that in just the last two years, companies and states around the world have collected almost as much data as was produced in all of prior human history.

“Yes, we can probably find some lone wolf terrorists, for example, by applying technology to all of this data. The system is becoming vastly more effective by the day. But without safeguards, we are inevitably going to have some very serious problems. Look at identity theft,” he warns.

“It now takes you 15 years to recover your own identity. I am not guessing that the government will do better than VISA,” he adds.

Algorithms, according to experts, are increasingly making crucial decisions for all of us, such as who gets a job or a bank loan and who gets flagged as a security threat at an airport. One particularly controversial use is predictive sentencing, which employs computer code to match an estimated likelihood of future criminality to a particular length of incarceration or a specific type of probation.

According to an investigation in May of this year by the public-interest non-profit ProPublica, some of the software being used by courts around America is actually reproducing the same biases that exist in the country’s criminal justice system, especially when it comes to race.
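To see how that can happen even when race is never an input, consider a deliberately simplified sketch. The scoring formula and feature names below are invented for illustration and are not drawn from any actual sentencing software; the point is only that if one group’s neighborhoods are policed more heavily, a “prior arrests” feature encodes that history, and any score built on it inherits the skew.

```python
# A toy risk score (hypothetical weights and features, not any real system)
# showing how historical bias can pass through a "race-blind" model.

def risk_score(prior_arrests, age):
    """Toy linear score: more prior arrests and younger age -> higher risk."""
    return 0.4 * prior_arrests + 0.3 * max(0, 30 - age) / 10

# Two defendants with identical conduct; defendant A's neighborhood is
# patrolled more heavily, so the same behavior produced more recorded arrests.
defendant_a = {"prior_arrests": 3, "age": 24}  # over-policed neighborhood
defendant_b = {"prior_arrests": 1, "age": 24}  # same conduct, fewer arrests on record

for name, d in [("A", defendant_a), ("B", defendant_b)]:
    score = risk_score(d["prior_arrests"], d["age"])
    label = "high risk" if score >= 1.0 else "low risk"
    print(f"Defendant {name}: score={score:.2f} -> {label}")

# The formula never mentions race, yet it flags A as higher risk purely
# because the arrest record reflects where police were looking.
```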

“I think we are at an inflection point,” says Pedro Domingos, a professor of computer science at the University of Washington and author of “The Master Algorithm.”

“We as a society need to make some decisions very soon about where this is all going and how we want these technologies to be used.”

One “terrible approach,” as the professor puts it, would be to ban various emerging technologies. But then, he argues, “a terrorist who could have been caught blows up something, so that doesn’t work.”

The other option is what we have right now, which the professor describes as a “free-for-all with very little testing for effectiveness. This could move us [going forward] to a police state with false positives.”

To underline the concern, Domingos, along with privacy advocates and ethicists, points to the announcement just last month that the Israeli company Faception had secured a U.S. Department of Homeland Security contract that would help the sprawling agency use facial recognition technology to identify potential terrorists and others who might decide to violate the law.

According to the company, “being able to utilize facial images to answer the questions: Who is this person? What are his personality traits? What motivates him?” could revolutionize how companies, organizations and even robots understand people.

“This would dramatically improve public safety, communications, decision-making, and experiences,” claims Faception.

In addition to Senator Johnson’s investigation, Arizona Senator John McCain is also now pressing forward on the technology front.

Just days after the Orlando shooting, the Republican senator and former U.S. presidential candidate filed an amendment to the Commerce, Justice and Science Appropriations Act that would give the FBI easy access to individuals’ browsing history and email data without having to obtain a warrant.

“In the wake of the tragic massacre, it is important our law enforcement have the tools they need to conduct counterterrorism investigations and track ‘lone wolves’,” he told the U.S. Senate.

Coincidentally, McCain’s proposal—which provides no concrete guidelines for exactly how vast new amounts of warrantless data might be tested, coded and overseen—appeared the same week that Neil Johnson, a physicist at the University of Miami, published the results of his study that attempts to model Daesh and its online supporters.

“Our findings suggest that instead of having to analyze the online activities of many millions of individual potential actors worldwide, interested parties can shift their focus to aggregates, of which there will typically be only a few hundred.”

“Our approach,” Johnson and his team submit, “combining automated data-mining with subject-matter expert analysis and generative model-building drawn from the physical and mathematical sciences, goes beyond existing approaches to mining such online data.”
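By way of illustration only, the core move the team describes can be sketched as collapsing individual accounts into group-level aggregates. The membership records below are fabricated, and this is not the researchers’ actual code or data; it simply shows how millions of records reduce to a much smaller set of units to watch.

```python
# A rough sketch (hypothetical data, not the study's methodology in full)
# of shifting the unit of analysis from individuals to aggregates.
from collections import defaultdict

# Invented (user_id, group_id) membership records, standing in for the
# millions of individual accounts a platform might see.
memberships = [
    ("user_001", "group_A"), ("user_002", "group_A"),
    ("user_003", "group_B"), ("user_004", "group_B"),
    ("user_005", "group_B"), ("user_006", "group_C"),
]

# Collapse individuals into aggregates: the object of study becomes the
# group and its size, not the person.
aggregates = defaultdict(set)
for user, group in memberships:
    aggregates[group].add(user)

print(f"{len(memberships)} membership records -> {len(aggregates)} aggregates")
for group, members in sorted(aggregates.items()):
    print(f"  {group}: {len(members)} members")

# In the paper's framing, analysts would then watch how these relatively
# few aggregates grow, merge, and fragment over time, rather than
# profiling each of the millions of underlying accounts.
```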

Despite claims like this, not everyone in the U.S. government is barreling down the predictive technology track.

When it comes to the increasing use of Virtual Reality (VR) by commercial companies—an effort that Facebook’s Zuckerberg says will “capture [the] kind of raw emotion or thought that we have”—Minnesota Senator Al Franken, the ranking member of the Senate subcommittee on Privacy, Technology, and the Law, is pushing back.

In an April letter to Facebook, Franken raised a series of privacy concerns over its VR technology as well as questions about how VR data is used to guide individuals or groups.

At this point, however, the senator’s questions have remained just that, although Franken has hinted that legislation may be necessary in the future.

The University of Washington’s Domingos hopes that “in some ways,” even with all of the Hollywood-esque statements about the limitless possibilities of technology or the impending robot apocalypse, “we can actually have a much more objective and productive discussion of what the algorithms should and should not do when it comes to these technologies.”

Unlike with people, where one has to guess what is going on in someone’s mind, an algorithm’s bias is explicit in the code itself and can be altered.

“So we need to be very careful about how to regulate,” he adds, “but we can, as citizens and governments, decide where the threshold is for taking action based on a particular technology.”
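A minimal sketch of what such a threshold might look like in practice helps make Domingos’ point concrete. The score values and the FLAG_THRESHOLD constant below are hypothetical, not taken from any real system; what matters is that, unlike a human screener’s hidden judgment, the line sits in plain view and can be audited or moved.

```python
# A minimal sketch (hypothetical values) of an explicit, adjustable
# decision rule: the threshold is written down, auditable, and changeable.

FLAG_THRESHOLD = 0.85  # an explicit policy knob, open to inspection and debate

def should_flag(threat_score: float) -> bool:
    """Flag for review only when the model's score crosses the set threshold."""
    return threat_score >= FLAG_THRESHOLD

# Auditors can test exactly where the line falls and argue over the number.
for score in (0.80, 0.85, 0.90):
    print(f"score={score:.2f} -> flagged={should_flag(score)}")
```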

As the code that runs so much of our lives becomes ever more powerful, deciding where exactly these thresholds might lie will very likely become fodder for a range of political fights around the world, sooner rather than later.

Of course, as the hacker Klein notes, it is also quite likely that the technologies themselves are already playing a prominent role in the efforts to stop the next Omar Mateen.
