This week in data: spying, lying and being human.
No, this isn’t everything that happened in data, not even close. When you start paying attention, you see that data breaches are happening daily: internet outages, cables cut, privacy bills passing, privacy bills getting killed or weakened by the very industry taking the data, internet as a human right, tremendous expansion of online education and blended learning, lawsuits, who’s spying on whom. It’s everywhere, and it’s enough to make your head spin. So today I write on just a few of the data stories of this week. You will want to read them; they are big stories. Click on the links to read the full stories. Be sure to read the last one, from Microsoft, on being human.
[T]he chilly relationship between government bodies and private tech businesses is growing frostier by the day. In the latest development, it has emerged that Twitter requested that one of its key B2B partners, Dataminr — a service that offers advanced social media analytics and early detection of major events like terrorist attacks or natural disasters — stop providing U.S. intelligence agencies with its tools and content.
But Dataminr isn’t ending its relationship with the government altogether: Dataminr still counts In-Q-Tel, the non-profit investment arm of the CIA, as an investor. Dataminr has taken investment from Twitter, too, highlighting some of the conflicts that remain as tech companies fight for more transparency and autonomy from government control. Interestingly, the agencies at the center of today’s news were using Dataminr in an unpaid pilot, TechCrunch understands. That pilot, which was coming to an end, could not be continued as a paid deal because of pre-existing Twitter policies that forbid selling data for use in government surveillance. The news of Dataminr cutting off intelligence groups was first reported by the WSJ, and we have confirmed the details directly with sources.
What exactly is Dataminr? The company uses Twitter firehose data — the unfiltered, full stream of Tweets from Twitter’s 300m+ users — along with other primary sources, to surface signals for big events as they are happening.
The company has a number of customers in the finance industry, and it works with government organizations like the Department of Homeland Security, which has a $255,000 contract with the startup. –TechCrunch
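Dataminr’s actual models are proprietary and far more sophisticated; purely to illustrate the underlying idea described above (flagging a “signal” when a term suddenly spikes against its recent baseline in a stream of tweets), here is a toy sketch. The function name, window sizes, and threshold are invented for this example, not anything from Dataminr or Twitter.

```python
from collections import Counter

def detect_spikes(stream, baseline_len=3, factor=3.0):
    """Toy event-signal detector for a stream of term buckets.

    stream: list of per-minute lists of terms, oldest bucket first.
    Flags any term whose count in the latest bucket is at least
    `factor` times its average count over the first `baseline_len`
    buckets (treating a near-zero baseline as 1 to avoid noise).
    """
    baseline = Counter()
    for bucket in stream[:baseline_len]:
        baseline.update(bucket)

    latest = Counter(stream[-1])
    alerts = []
    for term, count in latest.items():
        avg = baseline[term] / baseline_len
        if count >= factor * max(avg, 1.0):
            alerts.append(term)
    return sorted(alerts)

# A term that barely appeared before suddenly dominates the latest
# bucket, so it is flagged; steady terms like "rain" are not.
buckets = [
    ["rain", "game"],
    ["game"],
    ["rain"],
    ["earthquake"] * 5 + ["rain"],
]
print(detect_spikes(buckets))  # ['earthquake']
```

Real systems of this kind would also weight by user reach, geolocation clustering, and source credibility before raising an alert, but the core pattern is the same: compare current frequency against a recent baseline.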
In Hearing on Internet Surveillance, Nobody Knows How Many Americans Impacted in Data Collection
The Senate Judiciary Committee held an open hearing today on the FISA Amendments Act, the law that ostensibly authorizes the digital surveillance of hundreds of millions of people both in the United States and around the world. Section 702 of the law, scheduled to expire next year, is designed to allow U.S. intelligence services to collect signals intelligence on foreign targets related to our national security interests. However—thanks to the leaks of many whistleblowers including Edward Snowden, the work of investigative journalists, and statements by public officials—we now know that the FISA Amendments Act has been used to sweep up data on hundreds of millions of people who have no connection to a terrorist investigation, including countless Americans.
What do we mean by “countless”? As became increasingly clear in the hearing today, the exact number of Americans impacted by this surveillance is unknown. Senator Franken asked the panel of witnesses, “Is it possible for the government to provide an exact count of how many United States persons have been swept up in Section 702 surveillance? And if not the exact count, then what about an estimate?”
Elizabeth Goitein, the Brennan Center director whose articulate and thought-provoking testimony was the highlight of the hearing, noted that at this time an exact number would be difficult to provide. However, she asserted that an estimate should be possible for most, if not all, of the government’s surveillance programs. -EFF
Federal Trade Commission officials are asking questions again about whether Google has abused its dominance in the Internet search market, a sign that the agency may be taking steps to reopen an investigation it closed more than three years ago, according to sources familiar with the discussions.
Senior antitrust officials at the FTC have discussed the matter in recent months with representatives of a major U.S. company that objects to Google’s practices, according to sources with the company. While the inquiry appears to be in the early, information-gathering stage, it signals renewed agency interest in the kind of search case it examined — but ultimately closed without charges — in 2013.
When the FTC ended its earlier Google probe, critics said the agency — which secured a handful of concessions from Google on patents and some business practices — had essentially delivered a slap on the wrist to the search giant, which has a heavy lobbying presence in Washington. Since then, the European Commission has charged Google with anti-competitive behavior for allegedly manipulating search results to favor its own shopping services and using its Android mobile operating system to secure better placement of its apps with smartphone makers.
Critics complain that Google has used its online dominance to treat competitors unfairly — for example, by pushing search results for competing products off its homepage or siphoning valuable content from third-party sources without express permission. The practices, according to critics, undermine the widespread view that Google acts as a neutral gateway to information on the Internet. –Politico
Facebook Sued. Again.
A San Francisco federal judge rejected Facebook’s request to toss a lawsuit alleging that its photo-tagging feature, which uses facial recognition technology, invades users’ privacy.
U.S. District Judge James Donato allowed the case to move forward against Facebook under an Illinois law that bans collecting and storing biometric data without explicit consent.
“The Court accepts as true plaintiffs’ allegations that Facebook’s face recognition technology involves a scan of face geometry that was done without plaintiffs’ consent,” Donato wrote in Thursday’s ruling. When you are identified in a picture on Facebook, facial recognition software remembers your face so friends can tag you in photographs. The feature is called “tag suggestions” and it’s automatically switched on when someone signs up for Facebook.
Facebook says this helps users. Privacy advocates say the software should only be used with explicit consent [and with instructions on how to turn off the feature]. In Europe and Canada, where privacy concerns were raised, Facebook suspended use of the technology (but not in the U.S.). –USA Today
(Yes, Facebook is here twice.)
Facebook has bias in delivering news. A report shows financial ties to media outlets and reporters.
Facebook has also acquired a more subtle power to shape the wider news business. Across the industry, reporters, editors and media executives now look to Facebook the same way nesting baby chicks look to their engorged mother — as the source of all knowledge and nourishment, the model for how to behave in this scary new-media world. Case in point: The New York Times, among others, recently began an initiative to broadcast live video. Why do you suppose that might be? Yup, the F word. The deal includes payments from Facebook to news outlets, including The Times.
Yet few Americans think of Facebook as a powerful media organization, one that can alter events in the real world. When blowhards rant about the mainstream media, they do not usually mean Facebook, the mainstreamiest of all social networks. That’s because Facebook operates under a veneer of empiricism. Many people believe that what you see on Facebook represents some kind of data-mined objective truth unmolested by the subjective attitudes of fair-and-balanced human beings. –New York Times
Dave Coplin, chief envisioning officer at Microsoft UK, told an audience of business leaders at an AI conference that [Artificial Intelligence] is “the most important technology that anybody on the planet is working on today.” Before we go any further, it’s worth putting that claim into perspective.
There are a number of consumer-facing AI products already out there on the market that are getting better all the time – Microsoft’s Cortana, Amazon’s Alexa, and Apple’s Siri. But AI also has the potential to support crucial scientific research into everything from autonomous cars to cancer research. That’s probably why Coplin made his claim.
Coplin, who has worked in IT for over 25 years and authored two books about the future of technology, said: “This technology [Artificial Intelligence] will change how we relate to technology. It will change how we relate to each other. I would argue that it will even change how we perceive what it means to be human.”
He highlighted that a plethora of companies are now doing their own research into AI. “It’s not just Microsoft, Google, and Facebook. We’re all at it because it will change everything.”
But Coplin argues that societies need to start paying more attention to who is developing AI, as developers can build it to behave how they want it to, which may not always be a good thing. “We’ve got to start to make some decisions about whether the right people are making these algorithms,” he said. “What biases will be inferred by those people, by those companies? These are things we don’t know about. This is new. We talk about uncharted territory.”
Developments in AI will bring a number of societal issues, according to Coplin. “We have to be ready to deal with them. We have to understand that they exist. We have to start being mindful about the processes we put in place.”
Scientists like Stephen Hawking and technology leaders like PayPal founder and Tesla CEO Elon Musk have warned that AI could pose a threat to humanity if it is developed in the wrong way, while Oxford professor Nick Bostrom believes superintelligent machines could turn against us if they outsmart us.
“The way in which we choose to use AI is a reflection of humans, the people, not the machines themselves,” Coplin of Microsoft said. -Business Insider
Speaking of being human. A note of gratitude and hope.
Change is inevitable and often even good. However, we live in a world where technology has outpaced us, where futurists predict school in the womb and curriculum based on your DNA, an artificial intelligence avatar who knows all about you as your personal “Data Sherpa,” personalized machine learning, billion-student online schools without walls, a country without state borders, and well, much more… In the midst of all this change, we mustn’t forget to treat each other with basic human respect, ethics, and compassion.
I would like to thank those of you in the world who still remember what it means to be human and decent. Thank you to those who stand up to pressure, stand up for children, and still do what you know is good and kind and moral. Colorado just passed a student data privacy bill; while it doesn’t prohibit data collection or require parental consent, it was definitely a hard-fought win. What we did get is a novel approach and well-deserved transparency for the parents and children of Colorado. Again, thank you to the decent and brave elected officials of Colorado, and elsewhere, who are human.