
The Greatest Surveillance in History

Dec 07 2010 · Published under info sharing, silos

The Wall Street Journal has an interesting story about a rare moment of legislative censorship. “In an unusual move, the House Subcommittee on Commerce, Trade and Consumer Protection asked a Columbia University Law School professor to censor his remarks in a hearing about online privacy legislation,” writes WSJ reporter Jennifer Valentino-DeVries. Whose testimony was censored? Eben Moglen, Professor of Law and Legal History at Columbia University Law School, Chairman of the Software Freedom Law Center, and Director of the Software Freedom Conservancy.

Moglen’s testimony got to the heart of the problem of information sharing as it is now:

We already have a world where more than half a billion people put everything they say and do in one great big database owned by a single for profit business. […] How much surveillance is socially tolerable? How much are we prepared to abandon our traditional understanding that what we do in our daily life is nobody’s business except those with whom we choose to share?

Moglen’s prepared statement (PDF) is available at the Software Freedom Law Center and from the Wall Street Journal. His edited testimony (PDF) is available on the Committee’s website. If you’re interested in watching the whole 2+ hour hearing, you can catch it on C-SPAN or download it (WMV) from the Committee’s site. Note that Moglen’s testimony starts at 1 hour 37 minutes and ends at 1 hour 44 minutes.

Continuing from the Wall Street Journal,

Facebook spokesman Andrew Noyes confirmed that the company had seen a copy of Mr. Moglen’s prepared remarks before Thursday… Mr. Noyes indicated that Facebook had a problem with the written remarks from the start, saying Facebook was “surprised” to see that the remarks had “nothing to do with the topic of a serious and important hearing.”

The subject of the hearing was “Do-Not-Track Legislation: Is Now the Right Time?” The testimonies of other speakers are also available on the Committee’s website.

Moglen’s point, while evidently offensive to Facebook, seems right on topic, which is essentially a question of who gets to know what about whom:

Facebook holds and controls more data about the daily lives and social interactions of half a billion people than 20th-century totalitarian governments ever managed to collect about the people they surveilled.

Moglen’s written testimony, which prompted the committee’s request to censor it, made it clear that he sees Facebook’s so-called “privacy settings” as outright deception. Although the settings give users control over what other users and applications can see, they do nothing to provide privacy from Facebook itself. This may seem so obvious it doesn’t get mentioned (that Facebook can see what users put on Facebook), but Moglen makes a convincing argument that it needs to be mentioned, precisely because it is a risk so many are ignoring.

It would be possible to engineer a solution so that Facebook can’t see everyone’s information. Challenging, but possible. Perhaps that’s in our future.
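One way to picture such a solution is end-to-end encryption: users encrypt what they post before it ever reaches the service, so the platform stores and relays only ciphertext it cannot read. Here is a minimal sketch in Python using the cryptography package’s Fernet recipe; the SocialServer class and the out-of-band key sharing are hypothetical simplifications (key management is the genuinely hard part), not anything Facebook actually offers.

    # pip install cryptography
    from cryptography.fernet import Fernet

    class SocialServer:
        """Hypothetical platform: stores and relays posts, but only ever sees ciphertext."""
        def __init__(self):
            self._posts = []

        def publish(self, ciphertext: bytes):
            self._posts.append(ciphertext)  # opaque bytes to the server

        def feed(self):
            return list(self._posts)

    # Alice and Bob share a symmetric key out of band; the server never sees it.
    shared_key = Fernet.generate_key()
    alice, bob = Fernet(shared_key), Fernet(shared_key)

    server = SocialServer()
    server.publish(alice.encrypt(b"Dinner at 8?"))  # encrypted before upload

    for post in server.feed():
        print(bob.decrypt(post).decode())  # only key holders can read the feed

The design choice that matters here is where encryption happens: on the user’s device, before upload, so confidentiality no longer depends on trusting the platform.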

7 responses so far

  • But why 😉
Or in other words: What are the actual dangers coming from Facebook’s database?

Aside from that: IMHO simple privacy settings like on Twitter (public/private) make more sense, as there is not much to understand. Make it more fine-grained and you are in trouble.

    • JoeAndrieu says:

The problem is both the unknowable future uses to which such information might be put (specific but unknown future harms) and the innate loss of freedom such surveillance creates (a generic and immediate harm).

Essentially unlimited surveillance (covering all aspects of our digital lives) means losing the sense of privacy that is vital to a functioning democracy. Our very sense of self is tied to our ability to control how we “present” to the world. The extent to which we give up that control to anyone, whether the state or a private corporation, is the extent to which we lose the ability to be ourselves, to think and act independently of what the powers that be might want.

      One of my favorite XKCD comics puts this rather poignantly: http://xkcd.com/137/

  • PXLated says:

    Surveillance – Guess that’s one way to frame it.

And can’t we solve the problem of future harms when they actually occur? I mean, we never know what the future brings; usually it’s more good stuff than bad stuff, though.

And surveillance is IMHO only a problem if a small elite has more information about others. If it’s equal, I don’t see that many problems. Society can also adapt to the more open world; in fact it is doing so already, and I again see more good than bad.

    • JoeAndrieu says:

      Maybe, but maybe not.

Here’s a great quote from Ted Kaczynski in his Unabomber Manifesto, which I picked up from Kevin Kelly’s What Technology Wants:

      When a new item of technology is introduced as an option that an individual can accept or not as he chooses, it does not necessarily REMAIN optional. In many cases the new technology changes society in such a way that people eventually find themselves FORCED to use it.

We are establishing the norms for a whole new sphere of human behavior. And many, if not most, of us have very little understanding of what’s really happening behind the pretty curtain and what the risks are, which, IMO, means we lack the public discourse to fully judge and manage the risks of what’s going on. If a surveillance society is inevitable (or, as Jeff Jonas says, both inevitable and already here), then we need to figure out how we protect the freedoms that allowed us our current belle époque before unintended consequences make that impossible.

But we have always had little understanding of what a new technology means. Look at cars or the internet. We only see, bit by bit, what the risks are, but even more so what the opportunities are. And we have been able to adapt to those risks; I doubt that any reaction can be planned anyway.

Moreover, we tend to see potential risks which never come true, and we might miss problems we haven’t thought of. And of course we will always strive for freedom, no matter what. So if Facebook et al. become problems, we will react. I just don’t like reacting in advance, as it can also hinder the sometimes much bigger opportunities. And in fact I see this greater openness of people as a big opportunity.

The world is growing together and we are living in a truly global village. And as in a real-life village, that virtual village means that we get to know more about each other. As long as this power is equally balanced, I don’t see a problem.

If companies use that information in the wrong ways, there are still ways to react. But having that data also means that they can provide services which might not be possible if they could not read that information themselves. Being able to read it, btw, also means being able to fight spam (I heard of a local company which is not allowed to read personal messages and in fact has no way to deal with spam).

So I at least would rather see more effort put into preventing the misuse of information than into restricting its gathering (with consent, of course). I would also rather discuss the underlying problems behind data protection (e.g. intolerance, being denied insurance, etc.), as they won’t go away by building higher walls around data.

  • JoeAndrieu says:

Christian, I agree. In fact, Kelly’s book touches on this as well: preventative approaches don’t work; we need “proactionary” approaches that continually test & respond to actual, deployed technology.

We need to keep looking at the deployed systems, but I don’t think we need to experience all the potential harms. One doesn’t need to see how bad drunken flying would be once one has seen how bad drunken driving is.

    The issue is whether or not the specific harms we see in Facebook’s handling of user data can be corrected… and how best to do so.

Clearly, for example, automobiles cause a great deal of harm (they are the number one killer of children in the US), but we don’t outlaw driving; we regulate it.

And, in our view, the first step in regulatory oversight, even if the choice is to limit regulation, is public understanding and discussion of the situation. The good, the bad, the uncertain.

    A big part of that is understanding how much risk there is in letting ANYONE have access to as much data as Facebook does, which is Moglen’s point. Once we acknowledge those risks, we can start discussing how to respond.

To your point, I agree. The gathering isn’t the problem itself; it’s the likely misuse that’s the problem. And I do mean likely. It’s just a matter of time before someone abuses or misuses data accrued at the scale Facebook is accruing it. It could be a mistake; it could be systematic exploitation. But the inevitability is pure statistical probability.

    I think we’re in agreement that the solution lies not in “higher walls around data” but in moderating use of that data.

Right now, Facebook’s privacy controls do almost nothing for that. We can neither control nor even specify the appropriate use of the information we release through Facebook Connect. The only controls on usage are bundled into the Facebook Connect terms of service, which Facebook regularly changes. (A sketch at the end of this comment illustrates what “specifying use” could even look like.)

    Do we need specific harms of the kind that garner news headlines before we argue for better control over both the access /and/ the use of our information?
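    To make “specifying appropriate use” concrete: imagine if every piece of data we released carried a machine-readable usage policy that consuming applications had to check before acting. This is purely illustrative, a sketch of the idea rather than anything Facebook Connect supports, and every name in it is hypothetical.

        # Hypothetical: data released with an explicit, machine-readable usage policy.
        shared_item = {
            "data": "My phone number is 555-0100",
            "policy": {
                "purposes": ["display_to_friends", "spam_filtering"],  # uses the user consented to
                "retain_days": 30,  # delete after this window
            },
        }

        def use_data(item, purpose):
            """A consuming app checks the attached policy before using the data."""
            if purpose not in item["policy"]["purposes"]:
                raise PermissionError(f"use for {purpose!r} not permitted by the user's policy")
            return item["data"]

        print(use_data(shared_item, "display_to_friends"))  # allowed
        # use_data(shared_item, "ad_targeting")  # would raise PermissionError

    Enforcement is the obvious gap (nothing stops a bad actor from ignoring the policy field), which is why usage controls ultimately need contractual or regulatory teeth, not just code.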
