Tuesday, August 31, 2010

The Web Is Dead... Really?

The cover story for Wired Magazine this month, proclaiming that "The Web Is Dead", is clearly designed to be provocative. It's also true.

Here's why. If you actually read the argument, you'll understand that what the authors mean is not that the internet is dead, but that the free and open Web - in the collectivist utopian sense - is dead. This is an important distinction. Even though they often get conflated, the internet and the Web are two different things. The internet refers to the network; the Web refers to one application on that network, where we browse publicly accessible pages and documents. The authors argue that the network isn't going anywhere (in fact, they say it's become as crucial to modern life as electricity). It's the open Web that is fading into twilight.

Granted, this is a bit of an exaggeration. Nobody in their right mind believes that Google is going the way of the dinosaurs anytime soon. However, Chris Anderson's point is that the Web we grew accustomed to in the 90s has gradually been replaced. And, importantly, it's we who have voluntarily chosen to replace it...

Over the past few years, one of the most important shifts in the digital world has been the move from the wide-open Web to semiclosed platforms that use the Internet for transport but not the browser for display. It’s driven primarily by the rise of the iPhone model of mobile computing, and it’s a world Google can’t crawl, one where HTML doesn’t rule. And it’s the world that consumers are increasingly choosing, not because they’re rejecting the idea of the Web but because these dedicated platforms often just work better or fit better into their lives (the screen comes to them, they don’t have to go to the screen).


To support this argument, the authors provide a chart of internet traffic showing that a smaller and smaller share of it is devoted to Web surfing...

[Chart: proportion of total internet traffic by application, with the Web's share declining]
The main point to take away from all of this is that we, as information and media consumers, have gradually shifted our preferences away from the open Web and towards the so-called "walled gardens" of the internet - iPhone apps, Facebook, Skype, Netflix, Xbox, etc. These are all services delivered over the internet, but they are not publicly accessible unless you either pay for them or at least agree to what are often stringent terms of service.

These walled gardens are described by scholar Jonathan Zittrain as dangerous. They lead to "a loss of open standards and services that are 'generative' - [meaning that they] allow people to find new uses for them. The prospect of tethered appliances and software as service permits major regulatory intrusions to be implemented as minor technical adjustments to code or requests to service providers."

Basically, it means the Everyman can no longer tinker with what's out there - and that ability to tinker was the most fundamental defining characteristic of the early Web. As the trend towards proprietary walled gardens continues, it is a value that is increasingly lost.

The reaction to the article in cyberspace has been, predictably, a passionate one. Prominent blogs have weighed in: Boing Boing questions the aforementioned graph of internet traffic; TechCrunch predicts that people will inevitably become "overwhelmed" by apps and return to the browser (yet doesn't explain how or why); and Gawker highlights several hypocrisies of the article, such as how Wired published the cover story first to the Web, not through its iPad app, apparently because the editors still believe "it pays better to deliver that news via a dying medium".

While so many digerati are outraged, defensive, and quite possibly in denial about the "Web is Dead" argument, that doesn't necessarily make it untrue. It seems foolish to deny the power of apps in the current internet environment, and that doesn't seem likely to change anytime soon. Most likely, the power of apps will only increase over time, particularly as smartphones become ever more ubiquitous. To be sure, HTML pages, browsers, and blogs aren't disappearing off the face of the earth; they're clearly here to stay. But they now have to share the internet with their walled brethren, who are usually backed by companies with significant resources.

The open Web may not be completely dead. But in an app-driven world, the argument is not nearly as far-fetched as its critics would wish it to be.
  

Tuesday, August 24, 2010

Comparing Liberation Technology: Haystack vs. Tor...

There is a burgeoning market for hacktivist software that helps internet users evade surveillance. At the top of the list for many years now has been Tor, which enables people to surf the Web while masking their IP address, thereby making it extremely difficult for the authorities to identify them.

Tor now has a rival. Haystack is a soon-to-be-released software program, still in beta, which also seeks to protect users' privacy and is specifically aimed at providing unfiltered internet access to the people of Iran. Its developers' stated hope is that, by enhancing Iranians' capacity for free expression and uncensored access to information, they will be encouraging "peaceful opposition" to the regime.

Hacktivist software like this is typically well-intentioned, but a few observations are warranted...

First of all, Haystack is not an ordinary proxy system. "It employs a sophisticated mathematical formula to hide users' real Internet traffic inside a continuous stream of innocuous-looking requests. In addition to providing anonymity, Haystack uses strong cryptography, ensuring that even if users' traffic is detected, it cannot be read."
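
Haystack's actual design was never published, but the description above suggests a steganographic channel: encrypt the real request, then dress the ciphertext up as routine-looking web traffic. Below is a purely illustrative Python sketch of that general idea - the function names, cover URL, and shared-key setup are all hypothetical, and the real system was surely far more sophisticated:

    # Illustrative only: Haystack's real design was never made public.
    # The idea: encrypt the real request, then hide the ciphertext inside
    # what looks like an ordinary, unencrypted web request.
    from urllib.parse import urlencode, urlparse, parse_qs
    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()   # hypothetically shared with an exit server
    cipher = Fernet(key)

    def wrap(real_request: bytes) -> str:
        """Hide an encrypted request in an innocuous-looking URL."""
        token = cipher.encrypt(real_request).decode()  # URL-safe base64
        # The ciphertext rides along as what looks like a routine session
        # parameter on a harmless (hypothetical) cover site.
        return "http://example-cover-site.com/search?" + urlencode(
            {"q": "football scores", "sid": token})

    def unwrap(url: str) -> bytes:
        """The exit server recovers and decrypts the real request."""
        token = parse_qs(urlparse(url).query)["sid"][0]
        return cipher.decrypt(token.encode())

    assert unwrap(wrap(b"GET http://blocked.example/")) == b"GET http://blocked.example/"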

Second, Haystack is different from Tor. Tor focuses on using onion routing to ensure that a user's communications cannot be traced back to him or her, and treats evading filters as only a secondary goal. Tor also uses standard SSL protocols, which makes it easy to block, especially during periods when the authorities are willing to intercept all encrypted traffic. Haystack, on the other hand, gives primary attention to encryption that helps users evade filters. In fact, to a network observer, a Haystack user appears to be engaging in normal, unencrypted web browsing, which raises far fewer suspicions. Also, unlike Tor, Haystack has no public list of servers, which makes it exceptionally difficult for the authorities to discover which machines to block.
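
Tor's onion routing, mentioned above, is worth a sketch of its own: the client wraps its message in successive layers of encryption, one per relay, and each relay peels off exactly one layer, so no single relay sees both the source and the destination. A greatly simplified Python illustration (real Tor negotiates ephemeral keys per hop and uses TLS links and fixed-size cells):

    # Greatly simplified sketch of onion routing's layered encryption.
    from cryptography.fernet import Fernet

    relay_keys = [Fernet.generate_key() for _ in range(3)]  # entry, middle, exit

    def build_onion(message: bytes) -> bytes:
        # Encrypt for the exit relay first, then wrap each earlier hop's
        # layer around it, so the entry relay's layer is outermost.
        onion = message
        for key in reversed(relay_keys):
            onion = Fernet(key).encrypt(onion)
        return onion

    def traverse_circuit(onion: bytes) -> bytes:
        # Each relay strips exactly one layer; only the exit relay ever
        # sees the plaintext, and none knows the full path.
        for key in relay_keys:
            onion = Fernet(key).decrypt(onion)
        return onion

    assert traverse_circuit(build_onion(b"GET / HTTP/1.1")) == b"GET / HTTP/1.1"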

Third, Haystack is NOT open source. This might come as a surprise to some, but Haystack's counter-argument is that...

Although we sincerely wish we could release Haystack under a free software license, revealing the source code at this time would only aid the authorities in blocking Haystack. In the future, however, we would like to find a way to reconcile our Free Software ideals with the necessity of frustrating the efforts of those who would block Haystack.


This seems somewhat counter-intuitive to those of us familiar with open source software, where openness is usually treated as a security strength. In fact, the Haystack group themselves go on to say that "it would take centuries for all the world's computers to decipher one of our users' browsing sessions even with full access to the Haystack source code." In other words, the secrecy is meant to prevent blocking, not to protect the encryption itself.
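
That last claim is, at bottom, Kerckhoffs's principle: a well-designed cipher remains secure even when everything about it except the key is public. A toy demonstration, using an open-source cipher implementation:

    # Kerckhoffs's principle: the algorithm (and its source) is public,
    # yet ciphertext is useless without the secret key.
    from cryptography.fernet import Fernet, InvalidToken

    token = Fernet(Fernet.generate_key()).encrypt(b"browsing session")
    try:
        Fernet(Fernet.generate_key()).decrypt(token)  # attacker, wrong key
    except InvalidToken:
        print("Reading the source code doesn't help; only the key decrypts.")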

Regardless, it's a positive development that the much-beloved Tor is starting to see some viable competition. These aren't commercial products; what we're talking about is competition in the marketplace for non-profit activist software. But just as in the commercial marketplace, more competition means better products and greater innovation.
  

Sunday, August 22, 2010

Agenda-Setting in the Digital Age: The Case of Proposition 8...

The question of how some issues make it onto the national political agenda while others do not is one that academics have been theorizing about for years. To what extent does the media shape public opinion and mobilize popular support for specific issues? Traditional media certainly plays an important role in shaping political agendas, but how is social media on the internet now altering that dynamic?

This can go in several directions. Either 1) traditional news outlets remain the primary catalyst, leading the way for online social media to follow; 2) the reverse is true, and online social media leads the way for traditional news outlets to follow; or 3) "communication is a process" and they mutually influence each other in highly complex arrangements.

Somewhat predictably, a recent academic paper argues for the third option, complexity. In "Agenda Setting in a Digital Age: Tracking Attention to California Proposition 8 in Social Media, Online News and Conventional News," published in the journal Policy & Internet, the authors (Ben Sayre, Leticia Bode, Dhavan Shah, Dave Wilcox, and Chirag Shah) conducted a case study of the Proposition 8 issue in California to see whether traditional news outlets led the way, or whether online social media - specifically, YouTube - acted as the primary catalyst for subsequent coverage.

What they found was a direct correlation showing traditional newspapers in California clearly leading both YouTube and Google News coverage of the issue. In other words, newspapers would report on it first, then online media would react. This is an important result in its own right; however, timing complicates the data. As the authors responsibly acknowledge, traditional news outlets clearly led the way before the November 2008 election, but afterwards - especially during the period surrounding the 2009 California Supreme Court decision - YouTube videos were a much better predictor of both newspaper and Google News coverage.
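
Setting aside the paper's actual statistical machinery (which this sketch does not reproduce), the basic logic of a lead/lag analysis is straightforward: check whether one outlet's coverage at time t-lag predicts the other's at time t, across a range of lags. A toy Python version using synthetic daily counts, purely for illustration:

    # Toy lead/lag analysis with synthetic data (not the paper's method).
    import numpy as np

    def lagged_corr(leader, follower, lag):
        """Correlation between leader[t - lag] and follower[t]."""
        return float(np.corrcoef(leader[:-lag], follower[lag:])[0, 1])

    rng = np.random.default_rng(0)
    papers = rng.poisson(5, 200).astype(float)            # daily story counts
    youtube = np.roll(papers, 3) + rng.normal(0, 1, 200)  # follows 3 days later

    best = max(range(1, 15), key=lambda lag: lagged_corr(papers, youtube, lag))
    print(f"Newspaper coverage best predicts YouTube activity {best} days ahead.")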

What does this mean? Basically, since the number of YouTube videos actually increased in the aftermath of the election, while the newspapers' attention to the issue faded, the authors conclude that YouTube was being used as a platform for people to register opinions that they felt were not being represented in the mainstream. Indeed, it was opponents of Proposition 8 who accounted for nearly all of the activity on YouTube following the election. The lesson of social media in the Prop 8 case, then, is that people strongly identify with, and become more active in expressing, a minority position when that cause’s prospects take a turn for the worse.

Thus, traditional news outlets continue to be the primary catalyst for agenda-setting, while online social media serves better as a protest platform, a venue for expressing contrarian perspectives.

Agenda-setting is important because it dictates not only how readers learn about a given issue, but also how much importance they attach to it, based on the amount of information in a news story and the prominence of its placement. Online social media is clearly a part of this modern equation, but it hasn't usurped the influence of traditional news outlets just yet.
  

Friday, August 13, 2010

The BlackBerry Ban and the "Right of Free Use"...

Two weeks ago, the United Arab Emirates declared that it would ban all BlackBerries in the country unless the device's maker, Research In Motion (RIM), granted the government access to encrypted e-mails sent and received by BlackBerry users. An uproar rightfully ensued. The UAE, soon followed by Saudi Arabia, Lebanon, and India, argued that they needed to be able to monitor people's messages in the interests of national security.

RIM ought to be applauded for holding strong and maintaining even some semblance of user privacy rights.

As Hillary Clinton and the U.S. State Department immediately made clear, BlackBerry bans violate a "right of free use". Furthermore, such bans would only be a first step in a process that would erode privacy everywhere. As CNN reported, if the UAE ban holds up, you can expect even more foreign governments to feel emboldened and quickly follow suit.

Now, to be certain, the issue is more complex than many reactionaries in cyberspace give it credit for. There are legitimate security concerns for which BlackBerry encryption has become an obstacle. As Richard Falkenrath, a former U.S. deputy homeland security advisor, wrote in an op-ed, the Emirates' decision was met with approval, admiration, and perhaps even a touch of envy among American law enforcement investigators and intelligence officers. The men and women who make a living hunting terrorists, smugglers, and human traffickers rely on exactly this type of electronic surveillance to keep the rest of us safe.

In fact, in the United States, telecommunications providers are generally required to provide a mechanism for such access by the Communications Assistance for Law Enforcement Act of 1994 and related regulations issued by the FCC. As a general principle, information-service providers here must provide a means for federal agencies, usually the F.B.I., "to view the ostensibly private data of their subscribers when lawfully ordered to do so".

However, the problem with the current BlackBerry ban is that it's highly questionable, to put it lightly, whether hunting terrorists is the main objective for the countries involved. While that motive shouldn't be entirely dismissed, it's far more likely that, say, the government of China is interested in using BlackBerry monitoring to crack down on its citizens' free speech rights than in fighting Bin Laden. And even when intentions are genuine, putting such an insecure architecture in place makes it ripe for future abuse.
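
The architectural point deserves spelling out. In a toy model where encryption keys live only on users' devices (an illustration, not RIM's actual architecture), the carrier relays ciphertext it cannot read; the only way to provide the demanded access is to escrow keys somewhere the carrier - and thus any government that compels the carrier - can reach:

    # Toy model (not RIM's actual architecture): end-to-end keys vs. escrow.
    from cryptography.fernet import Fernet

    device_key = Fernet.generate_key()  # lives only on the users' devices
    escrow = {}                         # the "insecure architecture"

    def send_via_carrier(msg: bytes, escrow_keys: bool = False) -> bytes:
        if escrow_keys:
            # Once this exists, any government that compels the carrier
            # can read everything - now and in the future.
            escrow["subscriber"] = device_key
        return Fernet(device_key).encrypt(msg)  # all the carrier ever sees

    ciphertext = send_via_carrier(b"confidential e-mail")
    # Without escrow, the carrier has nothing to hand over but
    # random-looking bytes, no matter who orders it to comply.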

In other words, there would be no putting the genie back in the bottle. And the trust level for the authoritarian governments in question is hardly inspiring.

This is the battle that's defining our time. National governments are struggling to retain control in an increasingly borderless, internet-enabled world, and privacy is frequently in conflict with security. RIM has thus far acted nobly in defiance of the BlackBerry ban, just as Google held out against China a few months ago, but this battle is still in its early stages, and the end result will be nothing less than a re-determination of what the internet itself will be.
  

Tuesday, August 10, 2010

Is the Net Neutrality Plan by Google and Verizon For Real?

Last week, this NYTimes article claimed that Google and Verizon had struck a back-room deal whereby Google would pay millions to have its content delivered over Verizon's network faster than other websites' content, effectively killing the principle of net neutrality. In response, an incredible amount of mud-slinging and blog-flaming ensued.

Now, news reports indicate that this back-room deal may not have taken place at all, or at least not in the way it was originally portrayed. Alan Davidson, Google director of public policy, and Tom Tauke, Verizon executive vice president of public affairs, policy, and communications, have posted a joint statement on Google's Public Policy Blog that aims to clear up any confusion about the two companies' recent discussions.

In it, they lay out the policy framework they've submitted to the FCC, which, on its surface, appears to support net neutrality principles for landline internet service while carving out exceptions for wireless broadband. This public statement of support for net neutrality is encouraging, though the surprising extent of Verizon's concessions almost breeds skepticism.

Read it for yourself here.

Meanwhile, as maniacal bloggers continue to rant and rave on the subject, a more sober collection of viewpoints can be found on the NYTimes website, where leading scholars have shared more serious reflections.

Different sides of the debate are presented, which is always a good thing, especially in an issue area where most people are largely uneducated about the nuances. The point of view that resonates most is that of Columbia professor Tim Wu, who notes that firms like Verizon and Google have such power that, if they were to strike a deal that effectively killed net neutrality, they would then be able to decide which firms succeed or fail -- by making sites load faster or slower, or end up on page 10 of search results.

"The greatest danger of the fast lane is that it completely changes competition on the net. The advantage goes not to the firm that's actually the best, but the one that makes the best deal with AT&T, Verizon, or Comcast. Had there been a 2-tier Internet in 1995, likely, Barnes and Noble would have destroyed Amazon, Microsoft Search would have beaten out Google, Skype would have never gotten started -- the list goes on and on. We'd all be the losers."

If the joint policy framework that Google and Verizon submitted to the FCC is genuine, both in content as well as intent, then that would be a strong showing of support for net neutrality indeed. If it's a smokescreen, then it's business innovation and consumers who will ultimately suffer the repercussions. Either way, one thing is certain... without legislation that officially protects net neutrality, it's only a matter of time before some back-room deal does emerge.
  

Sunday, August 08, 2010

Bot Politics: The Domination, Subversion, and Negotiation of Code in Wikipedia

Catching up on some videos from this year's Wikipedia Research Conference, I found the most interesting presentation to be Stuart Geiger's "Bot Politics: The Domination, Subversion, and Negotiation of Code in Wikipedia".

The summary...

Recent research in the field of critical software studies has placed much attention on Wikipedia's software infrastructure, focusing on fully-automated bots, semi-automated tools, and other technological actors essential to Wikipedia's normal operation. This research trajectory has clearly demonstrated that such systems have significant sociocultural consequences for Wikipedia. However, this paper gives an alternative view by showing how these software agents are contested and negotiated. Specifically, I analyze the case of a bot created to enforce what was thought to be a near-universal norm: users should sign their comments in discussion spaces. However, this auto-signature bot was subverted by Wikipedian editors, and the ensuing conflict was only resolved by the creation of new standards that were at once social and technical limits on the behavior of humans and non-humans. Complicating the social and technological determinisms prevalent in software studies, this case illustrates that Wikipedia must be analyzed from a hybridized, sociotechnical perspective.
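
The bot at the heart of the case is easy to imagine in miniature. Here is a sketch of the auto-signing behavior the abstract describes (illustrative only; the actual bot worked against the live MediaWiki API and handled far more edge cases):

    # Illustrative sketch of an auto-signature bot (not the actual bot's code).
    import re

    # A crude pattern for a standard Wikipedia signature with a UTC timestamp.
    SIG = re.compile(r"\[\[User.*?\]\].*?\d{2}:\d{2}, \d{1,2} \w+ \d{4} \(UTC\)")

    def sign_if_needed(comment: str, username: str, timestamp: str) -> str:
        """Append an {{unsigned}} template when a comment lacks a signature."""
        if SIG.search(comment):
            return comment  # already signed; leave it alone
        return f"{comment} {{{{unsigned|{username}|{timestamp}}}}}"

    print(sign_if_needed("I disagree with this edit.",
                         "ExampleUser", "12:04, 31 August 2010"))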

Geiger's main point is something I have conducted much research on myself... that the creation of software code is inherently political. Code can enable or constrain certain types of behavior, and thus the battle over what kind of code gets written inevitably involves decisions about political ideologies and architectures of control.

Overall, a great presentation.

A few other personal favorites from the conference:

[Embedded video links]

A full list of presentations can be found here.