In a world of conflicting values, it’s going to be difficult to develop values for AI that are not the lowest common denominator.
[The] question of what happens when blackness enters the frame can kind of neatly encapsulate the ways I’ve been thinking and trying to talk about surveillance for the last few years.
What does it mean for human rights protection that we have large corporate interests—the Googles, the Facebooks of our time—that control and govern a large part of the online infrastructure?
Are there any limits to the connected workplace? Are there any concerns about the connected workplace? Is there any way in which you wouldn’t want either yourself or an employee to be connected? Are there any limits to the kinds of information we can gather in order to make our workforces more productive? In order to make our overall society more productive?
When I announced the talk on Twitter, somebody immediately was like, “Lawful abuse, isn’t that a contradiction?” But if you think about it for just a moment, it might seem to be a little bit more clear. After all, the legality of a thing is quite distinct from the morality of it.
Sure, cyberspace is about people and data. But it is also about applications. And devices. And the indirect and non-obvious relationships between all of this. It creates a very complicated and exciting ecosystem. One that is capable of dramatic innovation, and dramatic exploitation.
The Soviet experience suggests something really important for us today, which is that networks are entirely compatible with surveillance. And many of our favorite things to talk about, then, peer-to-peer production, or end-to-end intelligence, kind of missed the point that I think is now obvious. That whether you’re the NSA or Google or whoever else…you’re a general secretariat, seeking to privatize our power, and you are surveilling us, because you have a network in place.