Wednesday, May 25, 2011

How the Internet Affects the Give-and-Take Between Candidates and Reporters...

In media studies, political campaigns are often viewed as strategic contests between candidates and reporters - candidates seek free publicity, while reporters seek to maintain their autonomy rather than simply echo stump speeches. How will the Internet affect this dynamic in 2012?

This is the question tackled in a new article by Shanto Iyengar, "The Media Game - New Moves, Old Strategies" (The Forum: Vol. 9: Iss. 1, Article 1). Iyengar highlights how political candidates have held the advantage over reporters since the 1980s, developing intricate strategies to use or evade the media to their benefit, while reporters have been slower to adapt to new media technologies in order to protect their independence.

Iyengar suggests that the Internet has only strengthened the candidates' hand.

Although candidates can do little to persuade reporters to cover their speeches at length, they are in position today to accomplish an end-run: information technology provides them with a means of bypassing the media and reaching voters directly. At trivial cost, candidates can deposit their speeches, press releases, campaign ads, testimonials, and anything else they consider relevant on their websites (Druckman et al. 2009). As bandwidth has become more plentiful and video-compression technology more advanced, the content of these websites features a rich array of multi-media presentations designed to attract and hold the user’s attention.

The advent of video sharing technology and the rapid growth in the reach of social networking sites thus opened up vast new possibilities for direct candidate to voter communication. Moreover, new media platforms often provide the campaigns with precise data concerning the background and interests of their users, making it possible for the candidates to “target” pre-defined groups of voters with messages designed to resonate with their interests and policy preferences. As technology has diffused and more Americans spend significant amounts of time online, the audience for online news gradually approaches the audience for television news.


None of this is new or revealing information. Taken collectively, however, it has serious consequences when viewed through a journalistic lens. Candidates can now bypass the media completely and communicate directly with voters. Reporters, who have traditionally acted as filters for the public, deciding what was important and what was not, are now relegated to an even lesser role. They've become the obsolete middleman.

Or so the argument goes. Skeptics might point out that the Internet's myriad proliferating voices are precisely why those filters are more crucial now than ever before, and that reporters' original purpose - seeing through the political spin to maintain independence and autonomy - is as valid today as at any other time in history.

Perhaps Iyengar's most interesting contribution, then, is his concluding point... Rather than waiting for news organizations to report on the policies they care about, voters can use technology to examine candidates' positions on any issue whenever they like. Paying attention to the issues - instead of the media circus that too often centers on the more entertaining facets of the campaign, like the "horse race", the advertising, the strategy, and the scandals - means that voters could become more issue-oriented. And that's a good thing.

Ironically, the role of the Internet in the upcoming 2012 elections might be to exacerbate the growing divide between what are increasingly two very distinct electorates in American politics... those who seek to wade through all the spin and muck to vote on the substantive issues, and those who are eager to get caught up in it.
  

Monday, May 23, 2011

The Latest Obama Cybersecurity Plan Looks Awfully Familiar...

In case you missed it, the Obama Administration just unveiled its new cybersecurity plan for the nation. Many in the technology industry are showering it with praise, but is that praise warranted?

Nope.

It's not necessarily that the Obama plan is bad or misguided; it's just not a significant departure from previous incarnations of U.S. national cybersecurity policy.

What supporters are rallying around is the plan's call for a new cybersecurity coordinator who will answer directly to the president, the designation of cybersecurity as a "key management initiative", the development of better metrics for improvement, and investments in education and R & D.

All of which sounds great, and all of which has been done before. Ever since the Bush Administration's National Strategy to Secure Cyberspace policy document in 2003, several new cybersecurity coordinator positions have been created within the Executive Branch, metrics have been implemented, and there have been repeated calls for more education and R & D.

Above all, both the Bush and Obama plans focus on voluntary public-private partnerships as their core ideological tenet.

The "Action Plan":

  1. Appoint a cybersecurity policy official responsible for coordinating the Nation’s cybersecurity policies and activities; establish a strong NSC directorate, under the direction of the cybersecurity policy official dual-hatted to the NSC and the NEC, to coordinate interagency development of cybersecurity-related strategy and policy.

  2. Prepare for the President’s approval an updated national strategy to secure the information and communications infrastructure. This strategy should include continued evaluation of CNCI activities and, where appropriate, build on its successes.

  3. Designate cybersecurity as one of the President’s key management priorities and establish performance metrics.

  4. Designate a privacy and civil liberties official to the NSC cybersecurity directorate.

  5. Convene appropriate interagency mechanisms to conduct interagency-cleared legal analyses of priority cybersecurity-related issues identified during the policy-development process and formulate coherent unified policy guidance that clarifies roles, responsibilities, and the application of agency authorities for cybersecurity-related activities across the Federal government.

  6. Initiate a national public awareness and education campaign to promote cybersecurity.

  7. Develop U.S. Government positions for an international cybersecurity policy framework and strengthen our international partnerships to create initiatives that address the full range of activities, policies, and opportunities associated with cybersecurity.

  8. Prepare a cybersecurity incident response plan; initiate a dialog to enhance public-private partnerships with an eye toward streamlining, aligning, and providing resources to optimize their contribution and engagement.

  9. In collaboration with other EOP entities, develop a framework for research and development strategies that focus on game-changing technologies that have the potential to enhance the security, reliability, resilience, and trustworthiness of digital infrastructure; provide the research community access to event data to facilitate developing tools, testing theories, and identifying workable solutions.

  10. Build a cybersecurity-based identity management vision and strategy that addresses privacy and civil liberties interests, leveraging privacy-enhancing technologies for the Nation.



Sure, there are a few subtle changes in this new Obama plan. The fact that the new cybersecurity coordinator position will be dual-hatted to both the National Security Council (NSC) and the National Economic Council (NEC) is symbolically significant. But overall, these changes are mostly bureaucratic in nature. The main philosophical driving force behind the policy looks awfully familiar.

There is a reason for that. The vast majority of the Internet consists of privately owned and operated computer networks. This means that the vast majority of cybersecurity defense must take place in the private sector, and that the federal government is extremely limited in its capacity to effect meaningful change.

What the federal government should be held directly responsible for is 1) protecting the Internet's physical infrastructure (the actual wires and cables connecting networks and devices) within U.S. territorial borders and 2) safeguarding the digital information within its purview (like military intelligence and Social Security data) from outside intrusion.

Those two things ought to be the federal government's primary cybersecurity focus because those are the two things which it actually has some control over. Everything else in this discussion - like encouraging voluntary public-private partnerships, education, R & D, public awareness campaigns, initiating a national "dialogue", etc. - sounds nice and warm and fuzzy, and is definitely needed, but will inevitably produce rather limited results.
  

Wednesday, May 11, 2011

Microsoft's Desperate Acquisition of Skype...

When news broke yesterday that Microsoft is acquiring Skype for $8.5 billion, the business and technology communities were all abuzz. It's certainly gigantic news in the industry, but how much will it really matter?

The prevailing wisdom, not to mention Microsoft's great hope, is that the acquisition will "increase the accessibility of real-time video and voice communications, bringing benefits to both consumers and enterprise users and generating significant new business and revenue opportunities".

In plain English, they're hoping that Skype will finally give Microsoft a real foothold in telecommunications - something that they've been pursuing for the last decade but that has thus far evaded them.

My money says their desired foothold will remain elusive. From a consumer perspective, it'll be a nice additional feature to use Skype in conjunction with Microsoft Outlook or with Xbox, but how much will that ultimately transform the telecom landscape, really? What Microsoft needs in order to gain that ever-precious foothold is a mobile operating system that seriously rivals Android and the iPhone. Acquiring Skype is not a realistic substitute for that.

Meanwhile, the rumors circulating last week about Skype potentially being acquired by either Facebook or Google appear to have been leaked in order to drive up the price Microsoft would have to pay. Either that, or the rumors were legitimate and Microsoft swooped in at the last minute, fully doubling what the others were offering. In either case, the end result is that Microsoft may have overpaid by several billion dollars.

That's too bad. For the average consumer, Skype would have made the most sense being integrated with Facebook, whose IM feature - which has unbelievably heavy usage - could have gotten a major jolt in one fell swoop.

When all is said and done, Microsoft has indeed bought itself a pretty great stand-alone asset - even if its revenues are somewhat lacking. The only problem is that acquiring Skype probably won't accomplish the main goal Microsoft is acquiring it for.
  

Thursday, May 05, 2011

Twitter Prohibits Research on Osama Bin Laden Tweets...

Here's a quick news story that should be tossed into the "ridiculous" category.

Academic researchers are constantly data mining social media websites to collect information. This can be extremely useful in analyzing trends and other metrics.

So after the news broke on Sunday night that Osama Bin Laden had been killed, some researchers thought it might be valuable to analyze the thousands of tweets referring to the story. I was personally emailed a link to an archive of such tweets in XML format to be used in conjunction with DiscoverText software.

The datafiles were samples taken from live-feed Twitter imports starting shortly after the announcement of Osama bin Laden's death.

  • Twitter searches for "bin laden" (647,585 documents, 505 MB)
  • Twitter searches for "osama" (586,665 documents, 451 MB)
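As a rough sketch of how such an archive could be assembled from a paginated search API - the function names, the 1,500-item cap, and the stub standing in for the real Twitter call are all illustrative assumptions, not the actual DiscoverText implementation:

```python
def collect_items(fetch_page, max_items=1500):
    """Accumulate results by calling fetch_page(page_number) until
    max_items are collected or the API returns an empty page."""
    items, page = [], 1
    while len(items) < max_items:
        batch = fetch_page(page)
        if not batch:
            break  # no more results available
        items.extend(batch[:max_items - len(items)])
        page += 1
    return items

# Stub standing in for a real search call, e.g. a query for "bin laden".
# A real fetcher would hit the API with authentication and a query string.
def fake_search(page):
    return [f"tweet-{page}-{i}" for i in range(100)] if page <= 20 else []

archive = collect_items(fake_search)  # up to 1,500 items per fetch run
```

Repeating such fetch runs against a live feed and writing each batch out as XML would, over time, yield archives on the scale described above.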


This was all for research purposes; however, Twitter quickly shut down the project, citing its Terms of Service (TOS) agreement.

I was notified of the shut-down in a follow-up email that reiterated...

To be clear: we were giving the data away, not selling it. Also, it was not scraped off of Twitter. Rather, it was gathered using a Twitter-authorized account and an API that lets us fetch 1500 items at a time. It is a shame that the now 2 million tweets cannot, for example, be sampled and coded using a crowd source model.


Stuart Shulman is exactly right - this is prime historical data and there is no conceivable reason why Twitter would need to prohibit the aggregation of such data for non-commercial research purposes.
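For what it's worth, the sampling-for-coding workflow Shulman describes is trivial to sketch. The small corpus here stands in for the 2 million tweets, and the function and parameter names are illustrative, not drawn from any real tool:

```python
import random

def sample_for_coding(tweets, k=500, seed=42):
    """Draw a reproducible random sample of tweets to hand off
    to human coders in a crowd-sourced coding task."""
    rng = random.Random(seed)  # fixed seed so the sample can be re-drawn
    return rng.sample(tweets, k)

corpus = [f"tweet {i}" for i in range(20_000)]  # stand-in for the archive
batch = sample_for_coding(corpus)
```

Because the seed is fixed, any researcher with the same archive can regenerate the exact sample - the kind of verifiability that makes this data valuable for scholarship in the first place.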

Someone over there needs to implement a little common sense.


  

Wednesday, May 04, 2011

How Code Restrains Programmers...

When it comes to understanding power and authority in cyberspace, there is one guiding principle - "code is law".

At a fundamental level, code grants computer programmers the ability to shape the virtual environments in which average users interact. Code empowers programmers to design the rules of behavior - rules which can either constrain or enable different types of conduct. Unlike laws in real space, the policies created by code are not a matter of establishing a rule and enforcing it with penalties for those who break it; rather, they are an altogether different type of law, one in which the environment itself is built to deny the user even the capability to act in defiance.

But code does not only give programmers governing power over the virtual environments they shape; it also dictates the actions of those very same programmers and places constraints on them. In other words, even programmers, with all their powers in the Digital Age, must adhere to sets of rules that were established before them. Someone else has authority over them, and that too is an authority derived from code.

The first constraint on programmers is language. While it may be something of a blasphemy within certain circles of the programming community to make this assertion, it nevertheless holds true. A C++ or .NET programmer may believe he can write code to do whatever he wants on a technical level, but that is only true within the confines of what the language's designers (Microsoft in the case of .NET, the ISO standards committee in the case of C++) decided the language would allow. The designs of all programming languages are the result of explicit decision-making processes, often by formal institutions, and those decisions ultimately shape the decisions of those who use the languages. In other words, the capabilities and limitations of programming languages act as inherent checks on the behavior of programmers.
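A trivial illustration of the point (in Python rather than C++, purely for brevity): a language's designers can deny a capability outright. Python's designers made strings immutable, so no amount of programmer ingenuity will mutate one in place.

```python
s = "immutable"
try:
    s[0] = "I"  # attempt to change the string in place
    mutated = True
except TypeError:
    # the language itself denies the capability, regardless of intent
    mutated = False
```

The programmer never agreed to this rule; it was decided for him when the language was designed.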

This principle also raises the second constraint on programmers – the computing platform. Many programmers might reluctantly concede the above assertion, but they will then undoubtedly point to development technologies that are not controlled by private commercial firms. For example, a web programmer might argue that if he contributes to the development of a non-proprietary open-source language like PHP, Perl, or Python, then language becomes less of a restriction because he can have a hand in shaping it. However, the programming language is not the only constraint on the programmer. Even if one created an entire programming language from scratch, the behavior of the programmer would still be determined by the platform on which the resulting software runs. The code behind such platforms, whether an operating system like Microsoft Windows, non-OS-dependent platforms like Sun’s Java, or various “application programming interfaces” like the Google API, also either constrains or enables the behavior of programmers. In other words, a programmer’s code, no matter how independent, must still be written within the confines of rules established by the platform if it is to achieve a reasonable level of operability.

Most soon-to-graduate computer science students are, by now, familiar with how code empowers them to create the rules for different environments. They should also bear in mind how code simultaneously restrains them.


