Thursday, August 27, 2015

Can Google Rig Democratic Elections? Have They Already?

What influences voters is a central question in Political Science.  It is widely accepted that, to some extent, people's votes are influenced by the media, family, friends, income, education, and much more.  But last week, research psychologist Robert Epstein wrote a controversial piece in Politico detailing how Google trumps them all and could outright "rig" the 2016 election.  He boldly declares, "the Search Engine Manipulation Effect (SEME) turns out to be one of the largest behavioral effects ever discovered" and that it is "a serious threat to the democratic system of government."

Based on data collected in a research study, he asserts that Google's search algorithm - the way Google decides in what order to rank search results for a given term - can easily shift the voting preferences of undecided voters by 20% or more, and even up to 80% in some demographic groups - "with virtually no one knowing they are being manipulated".

His logic rests largely on the fact that 50% of the time Google users click only on the first two results, and that 90% of the time they never click beyond the first page of results.  Therefore, if someone searched for the term "Chris Christie", for example, whether the first page of results listed negative stories about the Bridgegate scandal or positive stories about New Jersey's improved budget during Christie's tenure as governor would influence undecided voters by 20% or more.  And surely, 20% would be enough to swing an election in a candidate's favor.

Epstein even suggests that if campaigns stopped flooding the airwaves with media blitzes that cost a fortune in the weeks before an election, and instead focused simply on finding "the right person at Google" who would tweak the algorithm their way, that would have far more of an effect in turning swing voters.

Afraid yet?  That seems to be the point.  But there are a host of reasons not to take this conspiracy theory at face value.

First, the point that Google's search algorithm is influential is virtually indisputable, but so what?  Google has for a long time now acted as a gatekeeper, or filter, for what information people ultimately access.  It is not a censor of content, but its rankings basically function as one.  However, it is a huge leap to conclude that "the right person at Google" could decide "which candidate is best for us" and "fiddle with search rankings accordingly".  Outlandish as this may seem, Epstein himself states that this is a "credible scenario under which Google could easily be flipping elections worldwide as you read this".

It doesn't get much more conspiracy-minded than that.

For Google's part, senior vice president Amit Singhal responded directly to Epstein's allegations, stating:
There is absolutely no truth to Epstein’s hypothesis that Google could work secretly to influence election outcomes. Google has never ever re-ranked search results on any topic (including elections) to manipulate user sentiment. Moreover, we do not make any ranking tweaks that are specific to elections or political candidates.

Second, there is the not-so-small matter of causality.  Epstein suggests that undecided voters click on the first few links about a candidate and make their decision who to vote for based on what they see.  However, what he overlooks is that the opposite is also true.  As people surf the web, and blog, and tweet, and link to stories about candidates, they are the ones determining which links will come up first in the search results.  In other words, people are influencing the algorithm as much as, if not more than, the algorithm is influencing them.

In this way, many political activists across the ideological spectrum have long sought to game Google's algorithm to their preferred candidate's advantage.  But vocal activists trying to influence people's votes is hardly a new phenomenon and, when they succeed, it counters the notion of top-down algorithmic control.

Third, Epstein's study recorded undecided voters' preferences after they were exposed to stories listed in search rankings.  But how much staying power did those preferences have?  It would be instructive to know how those undecided voters actually voted when Election Day eventually rolled around, and not just whom they said they intended to vote for immediately after the experiment.

Fourth, there is the completely unfounded argument that "Google’s search algorithm, propelled by user activity, has been determining the outcomes of close elections worldwide for years".  That is flatly absurd, unless it is qualified with a statement that so have television, radio, newspapers, and virtually every other form of modern media.

Following that thread, use your own judgment to evaluate Epstein's claim that "it's possible that Google decided the winner of the [2014] Indian election. Google’s own daily data on election-related search activity... showed that Narendra Modi, the ultimate winner, outscored his rivals in search activity by more than 25 percent for sixty-one consecutive days before the final votes were cast. That high volume of search activity could easily have been generated by higher search rankings for Modi."

Is anyone convinced by that?  Isn't it at least possible that higher search activity actually caused the higher search rankings?  (Hint - that's the way the search algorithm actually works.)

Again, no one disputes the influence that Google search rankings have on a whole host of topics.  This is why an entire industry - search engine optimization (SEO) - has come into existence: to game the algorithm for marketing purposes.  And, yes, political campaigns have tried, and will continue to try, to game the algorithm to their preferred candidate's benefit.  However, charging Google executives with "rigging" elections (Epstein's wording, not mine) is grossly irresponsible.


  

Tuesday, August 11, 2015

How to Turn Your Old Computer into a Web Server (for free)...

Years ago, I published a paper titled, "The Configuration and Deployment of Residential Web Servers".  In retrospect, it wasn't the sexiest title, but the idea remains as relevant today as it was then.  For the Internet to remain open and embody democratic values, power needs to be decentralized.  For anyone looking to do something proactive about it, one easy (and free) way of doing so is to turn your beat-up old computer into a fully functional Web server; the idea being that if you host Web content yourself, others have less control over what gets published.

So, in a brief attempt to update my old paper, here are the necessary steps for turning your old computer into a Web server...

  1. Download this MSI file which contains the Apache Web Server software.

  2. Double-click the MSI file to begin installing with the wizard.  Keep all of the defaults.  This should install the Apache Web Server to your "C:\Program Files" directory.

  3. Open the folder "C:/Program Files/Apache Software Foundation/Apache2.4/conf".  Open the file named "httpd.conf" in Notepad.  Scroll down and make sure that the following line is included (change the path to wherever you saved Apache if necessary):  

    • DocumentRoot "C:/Program Files/Apache Software Foundation/Apache2.4/htdocs"

That should be it.  To test it, open up your web browser (Firefox, Chrome, etc.) and go to the following URL:  http://127.0.0.1.  That should bring you to a web page that simply says, "It Works!".  You are looking at the "index.html" file inside of your Apache/htdocs folder.
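If you'd rather script that test than click around in a browser, here is a minimal sketch in Python 3 (my own quick sanity check, assuming Apache is running on the same machine):

    import urllib.request

    # Fetch the default page from the local Apache instance.
    with urllib.request.urlopen("http://127.0.0.1/") as response:
        print(response.status)      # expect 200 if Apache is up
        print(response.read(50))    # the first bytes of the "It Works!" page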

To actually share the contents of the Apache/htdocs folder on the Web, you need to set up Dynamic DNS.  Long story short, because your home ISP changes your IP address frequently, you'll need to download some free software, called a Dynamic Update Client (DUC), to keep your hostname pointed at your current address automatically.  I recommend using NoIP.com.  Then you'll need to set up Port Forwarding on your router by opening up port 80.

Any files you save to the Apache/htdocs folder will now be immediately published on the Web.  Not only do you have total control over any websites that you want to create, but you are also serving a higher democratic purpose.
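As a quick illustration, here is a minimal Python sketch that publishes a page by dropping it into htdocs (the page name is hypothetical, the path assumes the default install location, and writing under "C:\Program Files" may require administrator rights):

    from pathlib import Path

    # Adjust the path to wherever Apache was installed.
    htdocs = Path("C:/Program Files/Apache Software Foundation/Apache2.4/htdocs")

    # Hypothetical page; it becomes reachable at http://127.0.0.1/hello.html
    (htdocs / "hello.html").write_text("<h1>Hello from my residential web server!</h1>")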


  

Tuesday, July 21, 2015

The Courts Will Decide the Fate of Uber and the Gig Economy. For Better or Worse.

It's impossible to miss all of the news coverage this week related to Uber.  New York City Mayor Bill de Blasio is supporting legislation that would cap the number of new for-hire vehicle licenses issued to companies like Uber.  Nearly all of the presidential candidates are also chiming in with their views on whether Uber represents a model example of entrepreneurship or an example of how deregulation leads to wage stagnation that hurts middle-class workers.

At the heart of the debate are these two fundamentally transformative questions:  Are the people who work for Uber employees or independent contractors?  Does Uber, the company, simply put a tool out there and act as a middle-man, or do drivers functionally work for them?

The relevant statistic:  according to the New York Times, over 160,000 Americans depend on Uber for at least part of their livelihood.  The company directly employs fewer than 4,000 of them.

This is the culmination of a phenomenon known as "the gig economy".  As more individuals have moved toward freelancing, contracting, or temping work, the result has been an existential shift in the nature of employment itself.  1099s are becoming nearly as common as W2s.

The argument in favor of the gig economy is that it fosters entrepreneurship and innovation.  It circumvents entrenched interests and monopolistic arrangements, like that of the medallion taxis in New York City.  It offers more opportunities for people to make money, while reducing the role of regulations and bureaucracy that might act as a roadblock.

On the other hand, the argument against the gig economy is that it leads to income stagnation and economic insecurity for the middle-class.  Re-classifying workers as independent contractors, critics argue, is simply a way for companies to avoid offering workers the protections and benefits they are entitled to under the law, such as abiding by the minimum wage and offering workers compensation for injuries suffered on the job. 

The legal distinction between employees and independent contractors is, and ought to be, determined by the courts.  The key question that courts must answer is "whether a worker is economically dependent on the employer or in business for him or herself".  The courts typically use a six-part test revolving around the following types of questions:

  1. Is the work being performed an integral part of the employer's business?
  2. How much control does the employer exert over workers?
  3. Is the relationship between the two parties permanent or open-ended?

Where do things currently stand?  Thus far, the conflict is being resolved primarily at the state and local level.  Just last month, the California Labor Commission ruled that an Uber driver was, indeed, an employee deserving of a variety of workplace protections and was not, as the company maintained, an independent contractor.  And, of course, New York City will decide the fate of de Blasio's proposed caps this week.

However, the larger point here is that the implications of the gig economy go far beyond Uber.  For example, in 2009, the Labor Department sued Cascom for misclassifying workers as independent contractors, and a judge ruled against the company in 2011, awarding nearly $1.5 million in back wages and damages to roughly 250 workers.  In another example, just last month, FedEx agreed to pay $228 million to settle a class-action lawsuit brought by truck drivers who also challenged their classification as independent contractors.

How "employee" will be defined in the gig economy will have major ramifications for every business model in the years to come - from WalMart carpet installers, to New Media and citizen journalists, to Amazon affiliates, to even the most popular everyday Facebook or Twitter users posting revenue-generating content on social media sites.

In this new frontier, the hope is that, however the courts ultimately decide, they decide soon.  Some detailed guidelines would be immensely helpful.



  

Thursday, July 09, 2015

The Reddit Shutdown: Model Cyber Protest or Temper Tantrum?

Last Friday, approximately 300 discussion forums on Reddit were shut down by their moderators in a show of protest against the firing of Victoria Taylor, the company's director of talent.  Some of the details of this story remain shrouded in mystery - most notably, why Ms. Taylor was fired, as well as why moderators considered her so valuable in the first place.  The stated reasons don't seem overly compelling:  that she "coordinated high-profile forums...  would walk participants through the basics of using Reddit, create verified accounts for them to use, and help them introduce themselves to the community".

As for the more specific reasons behind the protest, the volunteer moderators first posted a document online that asked for better communication with official staff, as well as improved software tools for community management.  Then, yesterday, two of these moderators published an op-ed in the New York Times explaining "Why We Shut Down Reddit's 'Ask Me Anything' Forum".  In it, they describe their "anger at the way the company routinely demands that the volunteers and community accept major changes that reduce our efficiency and increase our workload", "a long pattern of insisting the community and the moderators do more with less", and their desire "to communicate to the relatively tone-deaf company leaders that the pattern of removing tools and failing to improve available tools to the community at large, not merely the moderators, was an affront to the people who use the site".

Reddit's CEO, Ellen Pao, apologized for not informing the community.  Meanwhile, all of the subreddit forums are back online.

Should the rest of us care?  On the one hand, because of Reddit's 160 million regular monthly visitors, this is a cyber protest with high visibility and, arguably, impact.  The volunteer moderators expressed their voice effectively in communicating their discontent to their corporate overseers, and did so, publicly, through collective action.  As far as cyber protests go, that's fairly significant.

On the other hand, Ms. Taylor is still fired and, only a few days later, the subreddits are cruising along as if nothing ever happened.  The practical effect of the cyber protest has been simply to get an apology and to publicly complain about "having to do more with less".

Unless I'm missing something, that's not exactly the sign of the apocalypse.



  

Tuesday, July 07, 2015

The Emerging Bitcoin Governance Regime...

As someone long immersed in the study of Internet governance, I often find it striking how similar the discussions and activities surrounding another supposedly "ungovernable" phenomenon - Bitcoin and alternative cryptocurrencies - are to those surrounding the Internet itself.

The Bitcoin system, like the Internet, has a highly decentralized architecture, and this is by design.  But also similar to the Internet, being decentralized is not the same thing as being in a state of anarchy.  Certain clearly identifiable stakeholders have influence in shaping Bitcoin's usage and development, and others even have a demonstrable authority to constrain or enable behavior with intentional effects.

The open source model has a long-established tradition of decisions being made by "rough consensus".  With Bitcoin, three different types of consensus are all necessary - consensus about the rules, consensus about history, and consensus that the coins have value.  Because the blockchain at the heart of Bitcoin relies so heavily on distributed copies of the transaction history, any change to the system must win rough consensus among the Bitcoin community before it will be adopted by others and remain interoperable with the rest of the currency system.
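To make "consensus about history" concrete, here is a toy hash chain in Python - a drastically simplified, hypothetical stand-in for the real blockchain - in which every block commits to the hash of its predecessor, so old transactions cannot be quietly rewritten:

    import hashlib

    def block_hash(prev_hash, transactions):
        # Each block's hash covers its predecessor's hash, chaining history together.
        return hashlib.sha256((prev_hash + transactions).encode()).hexdigest()

    h0 = block_hash("genesis", "alice pays bob 1 BTC")
    h1 = block_hash(h0, "bob pays carol 0.5 BTC")

    # Rewriting the first transaction changes every hash that follows,
    # so the tampering is obvious to anyone holding a copy of the chain.
    forged_h0 = block_hash("genesis", "alice pays mallory 1 BTC")
    print(h1 == block_hash(forged_h0, "bob pays carol 0.5 BTC"))   # False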

The rough consensus model, then, is the primary way in which decisions get made regarding the technology.  Policies are built into the code itself.

However, human beings play a large role as well.  Bitcoin Core is software licensed under the MIT open source license, and is the de facto rulebook of Bitcoin.  So the question for Political Scientists is:  Who exactly is writing the rulebook?

Officially, anyone can contribute new rules, or ideas for technical improvements, via "pull requests" to Bitcoin Core.  Anyone can formally submit a new Bitcoin Improvement Proposal (BIP) and advocate for their proposal to be adopted, which occurs when it gets published in the numbered Bitcoin Improvement Proposal series.

In reality, there are a small handful of individuals who have far more policymaking authority than others.  There are currently five developers who maintain Bitcoin Core:  Gavin Andresen, Jeff Garzik, Gregory Maxwell, Wladimir J. van der Laan, and Pieter Wuille.  These are the people who "hold the pen" of the Bitcoin rulebook.  Any rule changes that they make to the code will get shipped in Bitcoin Core and will be followed by default.

Beyond the Core developers, formal institutions have begun to play a larger role in Bitcoin governance as well.  The Bitcoin Foundation is a nonprofit founded in 2012 whose main roles are 1) to help fund the Core developers out of the Foundation's assets, and 2) to act as "the voice of Bitcoin" while engaged in lobbying national governments around the world who increasingly seek to regulate Bitcoin activity.  Some of the Bitcoin Foundation's board members have been involved with criminal and/or financial troubles, and it remains an open question to what extent the Bitcoin Foundation actually represents the Bitcoin community at-large.

All of which serves to illustrate just how much governance has already emerged in this supposedly "ungovernable" space.  Just as the Internet's protocols, or "rules", are governed by the rough consensus model led by institutions like ISOC and the W3C, Bitcoin has a clearly identifiable governance regime that makes decisions by rough consensus, whose rulebook is Bitcoin Core, and whose rules are written by the five Core developers.  And although the role of formal institutions like the Bitcoin Foundation is still unclear, they are quickly becoming recognized as an integral part of the governance equation going forward.

The bottom line is that, even in decentralized systems, rules are needed just to ensure basic functionality.  And where there are rules, there are rule-makers. 

Meet the new boss, same as the old boss.


  

Monday, May 18, 2015

Co-Design as a Driver of Pedagogical Innovation in Autism Education...

New technologies are often a catalyst for improving accessibility tools for individuals with disabilities.  However, the reverse is sometimes true as well.  In what ways and under what circumstances do disability perspectives act as a catalyst for technological innovation and a reformulation of design processes?  Rather than approaching disability as a problem to be solved, to what extent can an embrace of disability perspectives on accessibility designs lead to new generative outcomes?

In a new paper submitted to the Journal of Interactive Technology and Pedagogy (JITP), my co-author and I examine the pedagogical effects that mobile and cloud computing technologies have wrought in autism education – specifically, their transformational roles in enhancing portability, training, data collection and analysis, and synchronization.

A central component in autism education today is Applied Behavioral Analysis, or ABA.  While ABA has been shown to yield positive outcomes, it remains unavailable to many impacted individuals.  Factors contributing to this include the extensive time required to (a) design individualized teaching procedures, (b) train caregivers and therapists to carry out procedures consistently, and (c) collect and analyze data on a frequent basis to guide decision-making.

Mobile apps and cloud computing technology have emerged as crucial tools in the field over the past decade.  They have increased the portability of individualized materials and curriculum, the automation of data collection (resulting in more available time for analysis of said data), and the synchronization of relevant documentation amongst caregivers and treatment providers, who may be required to make decisions about treatment quickly under certain circumstances.

For example, we explore one case involving the evacuation of a family home in the middle of the night as flood waters neared.  The family of six, including a 16-year-old boy severely impacted by autism, was brought to the township's high school gymnasium, which served as a shelter for the neighboring community, for over two days.  The child's mother brought his iPad (much to the disbelief of the firefighters who performed the rescue) and was able to create a narrative story using actual pictures to prime him for the unexpected situation.

We argue that there is a further, enormous, and as yet largely unrealized potential gain to be had in terms of large-scale data analysis through an incorporation of Data Science techniques and the emerging field of "Big Data".  While the aforementioned software tools assist the educator in data collection and instantaneous metric calculations based on performance, they are still, by design, highly individual-centric.  From an institutional perspective, however, or for the purposes of large-scale research studies, additional insights could be gained by applying Big Data algorithmic approaches to massive data sets of varied and complex structures - revealing hidden patterns and previously unknown correlations in collections far larger than anything that could have been meaningfully analyzed even a few years ago.

Additionally, as integral as mobile apps and cloud computing have been in leading pedagogical innovation in autism education, the notion of co-design - specifically, the ability of the software's "users" to play a large role in actually designing it - is immensely significant.  The educator-practitioners implementing ABA techniques exemplify the blurred line that has emerged between researcher and designer, customizing software on the fly to suit specialized needs or unexpected circumstances; this holds true for less-credentialed and more-credentialed practitioners alike, as well as for parents, researchers, and other stakeholders.  Software that facilitates easy incorporation of co-design principles - allowing broader participation in deciding what exactly the software will do and how it will do it - will, we argue, continue to be the tool of choice in the field.

Ultimately, the trend of ABA practitioners incorporating mobile and cloud computing technology into their pedagogies for autism education is highly likely to continue.  It will also inevitably evolve, and that is where co-design and its multistakeholder, collaboratory approach is ripe to play a leading role in guiding future technological and pedagogical innovation.

  

Monday, April 06, 2015

The Political Economy of 3D Printing...

3D printing has transformative potential.  It will shift manufacturing away from assembly lines, shift economic control away from those with immense amounts of capital who own those assembly lines, and will instead empower a manufacturing-to-the-masses movement.  Surely, these developments are certain to disrupt, if not destroy, existing economic and political institutions, right?  At least, such are the claims that technologists have been raving about when it comes to 3D printing for several years now.

Although technology pundits have been engaged in this vision-forecasting, there remains a remarkable lack of scholarship addressing the potential consequences of 3D printing - nothing approaching the fervor of its most passionate enthusiasts.  A quick search on Google Scholar reveals that almost all the academic literature focuses on its technical specifications, and almost none on the sociopolitical implications.

The one area showing some early promise is that of Political Economy.  In New Orleans a few weeks ago, Greek scholars Pierrakakis, Gkritzali, Kandias, and Gritzalis submitted a paper to the International Studies Association Annual Conference titled "3D Printing: A Paradigm Shift in Political Economy".

Three "impact areas" are identified.  First, in terms of production and manufacturing, they argue that 3D printing will foster the trend toward "increasing product design freedom" and that on-demand production has "the capacity to drive a change in tastes".  At the very least, it's not unreasonable to suggest that a shift away from mass manufacturing and towards mass customization will likely have serious consequences on current manufacturing states like China.  Ultimately, it will be designs, not products, that move around the world.  And the geopolitics will have to adjust.

Second, in terms of work, tradeable goods will be transformed into commodities.  The example given is that someone with a 3D printer would have no need to buy plates, cups, or other everyday objects, and thus several areas of traditional manufacturing will struggle to survive, causing major disruptions in international political economy.  New industries and professions will displace older ones - there will be an increased need for the raw materials that feed 3D printing's additive manufacturing processes, increased demand for product engineering and product design and their corresponding skillsets, and an increased need for legal services specializing in intellectual property rights, which will take on escalating importance.  The authors even go so far as to raise the possibility of "a collapse of the traditional labor paradigm" as well as "the emergence of new classes, like the precariat".

Third, in terms of national security, they discuss the ability for individuals to print their own guns, although this isn't really a prediction - it's already been happening.  More noteworthy is their claim that, in the long run, one could predict entirely new classes of weapons developed with 3D printing techniques.  They also raise the implications of this for the mission of the U.S. military - namely, that if there is a decline in the mass production of goods on assembly lines, this may lead to a decline in global shipping of finished goods.  They argue that "this could reduce the magnitude of the challenge of protecting sea lanes with naval forces" and that lower demand for natural resources may even "reduce the likelihood of resource conflict".

This is a good start to a scholarly discourse that is in dire need of taking place, even if a few of the authors' claims seem pretty far-fetched.  What I might suggest is an examination of the intellectual property regime that is already so vital to 3D printing.  Anyone daydreaming about entrepreneurial ideas like creating a 3D-printed design marketplace may be surprised to discover not only that a burgeoning industry already exists, but that much of it, like Thingiverse.com, is actually driven by Creative Commons licensing.  Many product designs are given away for free by communities engaging in non-market social production.

To what extent this dynamic is merely an extension of non-market social production forces in other venues, or to what extent it is truly transformative in terms of political economy, is definitely worth exploring further.



  

Thursday, April 02, 2015

Anonymous Threatens Israel with "Electronic Holocaust" One Week Before Holocaust Remembrance Day...

The hacker group known as Anonymous is planning an "electronic holocaust" against Israel on April 7th. Their stated goal is to "take down [Israeli] servers, government websites, Israeli military websites, and Israeli institutions" and to send "a message to the youth of Palestine: you are a symbol of freedom, resistance and hope".

The planned electronic holocaust is to take place one week before the actual Holocaust Remembrance Day, known as Yom HaShoah, on April 15-16th.

Without even taking a side in the politics of the Israeli-Palestinian conflict, let us list the many reasons why this is a disgrace.

For starters, the term "holocaust" is of such great import and serious meaning that it's not something to be thrown around mindlessly.  A "holocaust" refers to the murder of an entire class of people, and the very phrase "electronic holocaust" is a display of ignorance that degrades the term's seriousness.  That Anonymous equates murder with "taking down a server" ought to tell you all anyone needs to know about the group.  Whether you support the Israeli or the Palestinian cause, I'm pretty sure every rational person would agree that taking down a server is not in the same category as genocide, and it's completely offensive to suggest otherwise.

Also, Anonymous is repeating the strategic problem it faces in almost every action it takes.  I've criticized the group in this blog before for how it seeks to fight censorship through... censorship.  It fights to promote free speech by... denying others the ability to speak.  They claim to be in the service of protecting civil liberties, yet over and over again their strategy has been to intimidate, censor, defame, and outright launch attacks against individuals and institutions who don't agree with their worldview.  Their hypocrisy lies in their tactics being diametrically opposed to the values of civil society which they claim to support.

So here's an idea.  On April 7th, when Anonymous pushes their "holocaust", people ought to fight back by posting all over social media that "Taking down a server is not the same thing as genocide #FightAnonymous".  Because, by the way, collective action in support of civil liberties is best accomplished through the actual exercise of those civil liberties, and not through their destruction.




  

Tuesday, March 10, 2015

Follow-up on the New Blogger Policy Regarding Sexually Explicit Content...

A few days after the previous post criticizing Blogger was written, lo and behold, they decided to alter course and revert to the old policy....

This week, we announced a change to Blogger’s porn policy stating that blogs that distributed sexually explicit images or graphic nudity would be made private.

 

We’ve received lots of feedback about making a policy change that impacts longstanding blogs, and about the negative impact this could have on individuals who post sexually explicit content to express their identities.

 

We appreciate the feedback. Instead of making this change, we will be maintaining our existing policies.

 

Their responsiveness should be applauded.
  

Thursday, February 26, 2015

Blogger Will No Longer Allow Sexually Explicit Content. Here's Why This Is So Problematic...

Blogger - Google's blogging web service - just posted a message to all its users that on March 23rd it will "no longer allow certain sexually explicit content".  When you read the details, they state that if a blog does have sexually explicit material, then on March 23rd the entire blog will be made private, only visible to individuals who have accepted an invitation from the administrator.  It further states that, "We'll still allow nudity if the content offers a substantial public benefit".

There are several reasons why this is so problematic.

First of all, the new Adult content policy is unbelievably vague.  What is their definition of "sexually explicit"?  There was one case where Facebook deemed informative content about breast cancer - content meant to encourage mammograms - "sexually explicit".  There was another famous case where it banned images of breast-feeding mothers.  After they protested, that group ultimately became known as "The Lacktivists", which may just be the greatest moniker for a group ever.  But I think you get the point.  There's not always a strong consensus over what is "sexually explicit", and websites have a history of frequently getting it completely wrong.

Second, in a similar vein, how will they determine what "offers a substantial public benefit"?  It's a definitional problem once again.  And it's worth pointing out that the new policy doesn't indicate whether it will be human beings or an algorithm making the final judgment.  Both methodologies have their flaws, so how comfortable should we be with either of them?

Third, the practical question for millions of Blogger users with a long posting history is:  How will I know ahead of time if my entire blog will suddenly be taken offline?  For example, The Nerfherder has been using Blogger since its inception in 2006.  This is clearly not a blog dealing in sexually explicit content; however, it has occasionally reported on news events related to the regulation of such material.  For instance, we once wrote a post about the outing of a Reddit troll named "Violentacrez", who had created forums titled "Jailbait" and "Rapebait", to name only a few.  PLEASE, read the post for yourself and decide whether, in any way, shape, or form, you believe this should be considered "sexually explicit content".  Should The Nerfherder now fear that this entire blog is about to be taken offline because an algorithm might discover those phrases in a post?  It would at least be helpful to know ahead of time if it was going to be taken down.

Private companies certainly have the right to remove sexually explicit content from their service.  There's no problem there.  The problem is that Blogger should have 1) provided more detailed criteria for what would be deemed "sexually explicit", 2) offered additional criteria for what would be considered as having "substantial public benefit", 3) been transparent in whether this new policy was being implemented by algorithm or by human beings (in order to know who should be held accountable for egregious overreaches), and finally, 4) informed its users ahead of time if their blog was about to be taken offline so that they could take preemptive steps in order to avoid the takedown, as Blogger itself suggests.

Wordpress, anyone?


  

Monday, February 23, 2015

Internet Governance and the New HTTP2 Protocol...

Proof that the Internet is, in fact, governed can most easily be found in the adoption of its technical standards and protocols.  Think about it:  despite the Internet's decentralization, certain protocols have to be designed and adopted by nearly everyone just to ensure that the Internet remains interoperable and functional.  Not only does virtually everyone need to agree on these protocols, but clearly identifiable institutions have to make decisions, resolve conflicts, and maintain control over them.  This authority is the very definition of governance.

Which brings us to last week's big news that the HTTP2 protocol has officially been completed.  There is a single institution, the Internet Engineering Task Force (IETF) - an international, non-profit organization - responsible for making decisions about the Internet's standards and protocols.  HTTP2, as its name suggests, is the next evolutionary leap forward for the classic HTTP protocol, whose current version has been the Web's main standard for data communication since 1999 and whose previous version dates to 1996.

So let's all celebrate!  After all, this is the Web's open democratic process in action, right?  Without the intervention of any national government, the Web has once again initiated an open participatory process, issued a Request for Comments (RFC), and ultimately built a rough consensus upon which it made a binding decision about its own future development.  Is this not the self-governance and autonomy that early Internet evangelists predicted?

Well...  There is one notable observation weakening the utopian self-governance argument.  HTTP2 is based on SPDY, which was invented by Google and later supported by Apple, Microsoft, Amazon, Facebook, and others.  In fact, those companies pushed hard to get the IETF to formally adopt it.  Some may argue that corporate influence has decreased the level of democratization in the process, rendering the IETF a mere agent of such corporations and institutionalizing their self-interested preferences.  However, others will correctly point out that such corporate involvement has been a part of the IETF's standards-setting processes from the beginning, so it's really nothing new, and may even be considered crucial to a new protocol's widespread adoption.

Regardless of the power relationships involved in this aspect of Internet governance, the question many of you will undoubtedly have relates to relevance.  How will this momentous development affect your life?  Mainly by speeding up your web browsing.  And there's certainly not going to be a grassroots movement protesting that.
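For the curious, it's easy to check whether a given site already speaks the new protocol, because HTTP2 support is advertised during the TLS handshake via ALPN.  A minimal sketch in Python (assuming Python 3.5+ and an HTTPS-enabled site; the hostname is just an example):

    import socket
    import ssl

    host = "www.google.com"   # an early HTTP2/SPDY adopter
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])

    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # Prints "h2" if the server negotiated HTTP2, else "http/1.1".
            print(tls.selected_alpn_protocol())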


  

Wednesday, February 18, 2015

Why Google's Research Study on Data Localization and Cybersecurity Shouldn't Be Taken Seriously...

Earlier this week, Google announced the release of a research study - conducted by Leviathan Systems, but commissioned by Google - which sought to compare the security of cloud-based versus localized systems.

Many countries around the world have recently proposed laws that would require companies to keep the data about that country's users within national borders.  For example, if a website in France was saving the personal data of French citizens, then the law would require the website to save that data somewhere within France's borders, as opposed to, say, California.  The logic is two-fold: first, information about a country's citizens would stay out of the hands of spying foreign governments and, second, it would better enable countries to design and implement their own privacy laws (to that point, privacy laws are much stronger in the European Union than in the United States).

Predictably, Google and many other high-tech firms have come out against such laws requiring data localization.  For them, it's an added expense.  Google would need to backup and store user data within each such country in which it operates, rather than using Silicon Valley as its central hub for everything.

Because of this opposition, one has to be somewhat skeptical of a research study paid for by Google concluding that data localization is so clearly negative.  Their argument is that cloud-based systems are more secure than localized ones, and that there would be a shortage of expertise within many countries to put stronger cybersecurity measures into effect.

It's not that there's no truth to that claim; it's just that we can be forgiven for being a little skeptical.  This has become the modus operandi within the tech industry: lobby elected representatives, lobby regulatory agencies within the Executive Branch, and pay for-profit think-tanks to conduct research studies which often lead to predetermined results favorable to their sponsor.

From a purely economic point of view, of course Google wants to avoid data localization requirements.  But there are non-economic arguments for why localization might be considered a positive - namely, the better protection of privacy rights.  Google can hardly be considered unbiased, and thus, this study's conclusions shouldn't be considered authoritative, by any stretch.


  

Tuesday, February 10, 2015

Creating a Constitution with Open Data...

Most national or state constitutions aren't written from scratch, but rather are derivative works based on other national and state constitutions. For example, the constitution of Japan looks remarkably similar to that of the U.S. (largely because it was written in 1946, when the U.S. occupied Japan after World War II). In fact, on average, five new constitutions are written every year, and even more are amended.

Could modern data-driven technologies help in the constitution-drafting process? Furthermore, could any individual potentially create a constitution that would govern some type of entity using such tools as well? What would be the consequences of this?

Google Ideas launched a website called Constitute in 2013 which allows people not only to view and download every national constitution in the world, but also to easily compare them side by side. Furthermore, Constitute lets you mash up excerpts from different constitutions so that, in effect, you can embed your own constitutional ideas in a single document and share it on social media. Going yet another step further, Constitute also makes all of its underlying data freely available through an open data portal, complete with its own API for programmers and researchers.
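As a rough sketch of what working with that open data might look like in Python - with the caveat that the URL below is illustrative, not necessarily Constitute's documented route, so consult their developer documentation for the real API:

    import json
    import urllib.request

    # Illustrative endpoint; check constituteproject.org's API docs for actual routes.
    url = "https://www.constituteproject.org/service/constitutions?lang=en"
    with urllib.request.urlopen(url) as response:
        constitutions = json.loads(response.read().decode("utf-8"))

    print(len(constitutions), "constitutions available for comparison")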

It's an interesting exercise to think about what type of constitution you would create for governing Internet use in the United States. What ideas would it embody? What values and/or rights and liberties would it be designed to protect? This is not as hypothetical as you might imagine. Brazil actually passed such an Internet Constitution last year. How might an open data approach affect outcomes?

  

Wednesday, January 21, 2015

"The Permanent Professor": How the Long-Term Use of Social Media Transforms the Professor-Student Relationship

The presentation I recently gave at the American Political Science Association Teaching and Learning Conference...




Or...

http://prezi.com/_4kxi0lq4n15/?utm_campaign=share&utm_medium=copy


  

Saturday, January 03, 2015

What Do They Teach in a Hacking Class?

Non-Computer Science laymen always seem shocked to hear that undergraduate courses are offered in hacking.  Why?  It's really just a sexy way to market a course in cybersecurity.  Or so we tell everyone.  If you've ever been curious as to what they teach in a hacking class, here's a general outline (since I'm prepping for next semester anyway):

  • Penetration Testing

  • The instructor typically sets up a "hacking lab" where one machine or small network is configured with different types of security solutions installed.  The objective for the semester will be for students to hack the instructor's machine and its setup.  These days, security testing in the classroom is easily accomplished using BackTrack Linux and Kali Linux.

  • Reconnaissance

  • The idea is to gather as much information about a target as possible to increase your chances of success later.  This is done through a combination of Google search directives, the theHarvester Python script, the WhoIs database, Netcraft, Fierce, MetaGooFil, the ThreatAgent Drone, and other tools.  The goal by the end of the Reconnaissance stage is to have a list of IP addresses that belong to the target.

  • Scanning

  • Once we have a list of IP addresses, the next step is to map those addresses to open ports and services.  Students need to determine whether a system is alive using ping packets, port scan the system with Nmap, use the Nmap Scripting Engine (NSE) to gather further information about the target, and scan the system for vulnerabilities with Nessus.  (A toy illustration of the Reconnaissance and Scanning stages appears after this outline.)


  • Exploitation

  • This is the process of actually gaining control over a system.  Students explore online password-cracking tools like Medusa and Hydra, as well as learn how to use tools like the full Metasploit framework, Wireshark, Macof, and Armitage.  This is really the stage most people think of when they think of computer hacking, but the point to stress to students is that only by engaging in the two preliminary steps above will you get the most out of Exploitation.


  • Social Engineering

  • Making your attack vectors believable.  After all, the best hacks are those which go undetected.  The Social-Engineer Toolkit (SET), website attack vectors, credential harvesters, and more are explored.


  • Web-based Exploitation

  • For when websites themselves (not only local networks connected to the Internet) are the target.  This stage includes intercepting requests as they leave the browser, discovering all of the files and directories that make up the target web application, and analyzing responses from the web application to find vulnerabilities.  Frameworks to use include w3af, the Burp Suite, the Zed Attack Proxy (ZAP), Websecurify, Paros, and other role-specific tools.


  • Post-Exploitation: Maintaining Access

  • Using backdoors, rootkits, and Meterpreter shells that allow the attacker to return at will.  Tools include Netcat and Cryptcat, along with a comprehensive explanation of how rootkits operate.
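As promised above, here is a toy illustration of the Reconnaissance and Scanning stages in Python - a crude stand-in for purpose-built tools like Nmap, aimed at a hypothetical target (and, as always, only ever scan systems you own or have written permission to test):

    import socket

    # Reconnaissance: resolve a target hostname to an IP address.
    target = socket.gethostbyname("www.example.com")   # hypothetical target
    print("target:", target)

    # Scanning: a bare-bones TCP connect scan of a few common ports.
    for port in (21, 22, 23, 25, 53, 80, 110, 443, 8080):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        if s.connect_ex((target, port)) == 0:   # 0 means the connection succeeded
            print("open:", port)
        s.close()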


    Still find this interesting, or did these details deflate your excitement about learning "how to hack"?  Remember, the real challenge for us non-criminal types is to prevent these tools and methods from working.  It is an arms race, and we're in it to win it.


  

Monday, December 08, 2014

Tweeting Alone: Slacktivism and the Decline of Civic Engagement...

Dave Karpf of Rutgers University recently wrote a clarifying piece titled "Slacktivism as Optical Illusion", in which he describes how the online activities labeled (with a negative connotation) as slacktivism can either be a waste of time or actually serve a larger purpose.  It depends on how the activity is carried out.

He makes three points about how slacktivist activities can be meaningful:  First, they should be used strategically to attract mainstream media attention.  He points out that, today, journalists and editors actually turn to social media to pick out potential stories worth covering.  Second, they should have a specific target in mind.  For example, a general online petition to "stop animal cruelty" is guaranteed to make no difference, whereas the type expressing displeasure with a specific corporation has a history of leading to successful policy change.  And third, organizations should develop relationships with people who've engaged in simple acts of digital engagement over time in order to "ladder" them up to larger-scale activism.

Great points all, and it's refreshing to read something of a how-to guide for constructive slacktivism rather than just yet another venting of frustration about it.

Something else that may be added to the conversation is how slacktivism is related to the decline of civic engagement in America more generally.   Robert Putnam described in his classic book, Bowling Alone: The Collapse and Revival of American Community, how social structures, or community-building organizations, from bowling leagues to weekly poker games to church-going Sundays, have been experiencing a major decline in participation for decades.  This decline in community-related activities has led to a decline in civic engagement and political participation as well, as more individuals engage in solitary activities disconnected from others.

Since its inception, online social networking has raised the question of whether it fosters "networking", community-building, and civic engagement, or whether it works against them.  And slacktivism is a strong component of this question.  If you tweet expressing support for a cause, does that make you more likely or less likely to engage in other forms of activism on the cause's behalf in the future?

Karpf is on the right track.  More ideas need to be generated in order to make "more likely" the more frequent answer.


  

Tuesday, November 18, 2014

CyberWar: Anonymous vs. the Ku Klux Klan

Over the weekend, a cyberwar ensued between two highly controversial groups - Anonymous and the Ku Klux Klan.  As ZDNet reports, at issue was the upcoming grand jury verdict in the Michael Brown case in Ferguson, MO.  Here is the sequence of what went down...

A Klan group named the Traditionalist American Knights of the KKK distributed flyers last week threatening the use of "lethal force" against the protesters in Ferguson.  In response, members of the hacktivist group Anonymous "skirmished" with the KKK on Twitter, at which point, after being "mocked and threatened", Anonymous launched a full-blown cyberwar campaign called #OpKKK and ultimately seized control of the Klan's main Twitter account, @KuKluxKlanUSA.

Anonymous then issued this statement explaining how the Klan is a terrorist group with blood on their hands and, as a result, the Klan "no longer has the right to express their racist, bigoted opinions".

But the story's not finished.  The Klan responded by using their other primary Twitter account, @KLANonymous, to post this message...


Anonymous then quickly seized control of that account as well.

Meanwhile, Anonymous has also been launching Distributed Denial of Service (DDoS) attacks on much of the Klan's online presence.  They've shut down websites like IKKK.com and TraditionalistAmericanKnights.com, as well as Stormfront, the largest white nationalist discussion board.

Now Anonymous has turned its focus towards identifying Klan members with its #HoodsOff campaign.  They are doing this by looking at the Direct Messages sent over time to the Klan's seized Twitter accounts, although Anonymous explicitly acknowledges that they are still debating to what extent people's identities should be made public, considering that they "are not completely sure how much of a connection many of the people actually have to the KKK" and want to make sure they are outing the right people.

That about sums it up.  For now.

First of all, is it somewhat surprising to anyone else that officially recognized active hate groups and domestic terrorist organizations have non-secretive Twitter accounts?  Call me naive, but wouldn't a Twitter account called @AlQaeda or a website named "www.alqaeda.com" be shut down by homeland security or law enforcement officials immediately?  How does Twitter even allow something called @KuKluxKlanUSA to exist?  There's no technical reason that would make removal difficult; it's just a policy decision.

Second, let us also not forget that Anonymous is considered by many to be a criminal, even cyberterrorist, organization as well, having previously launched attacks against U.S. government agencies, police departments, and even launched anti-Israel cyberattacks on Holocaust Remembrance Day.  So before Anonymous is applauded too strongly for their efforts against the KKK, let's just keep in mind that they're not exactly heroes by any stretch of the imagination.

Third, it should be observed that Anonymous is getting better at what they do.  The speed at which they managed to seize control of the Klan's Twitter accounts and launch effective DDoS attacks that shut down numerous websites and discussion boards was impressive, even by their own standards.  It makes their calling card, "You should have expected us", even that much more frightening.

No one's going to have, nor should have, any sympathy for the Ku Klux Klan, and in that sense this is a story with a positive outcome.  That said, in the larger scheme of things, it remains difficult to sympathize with Anonymous either, because they pursue their stated goal of freedom largely through intimidation.  If you cross them, they will attack you.  This blog has been flamed by Anonymous before, and to be honest, it does indeed make one hesitate to write about them further.  And that's the problem.  Anonymous creates a very real chilling effect on the very speech they claim to protect.



  

Wednesday, November 12, 2014

Big Data and Municipal Governments...

Data analytics, or "Big Data", is already widely used by businesses to find correlations that help to make predictions - predictions about consumer behavior, predictions about value-chains and supply-chains, etc.  By doing so, Big Data greatly improves organizational efficiency and forecasting, spotting trends as they emerge or even before they emerge.

So why not put Big Data to use in order to improve the workings of government?

In their book titled, "The Responsive City: Engaging Communities Through Data-Smart Governance", Stephen Goldsmith and Susan Crawford explore how municipal governments, in particular, can use Big Data effectively to radically transform how they serve their citizens.  As summarized by the Harvard Gazette:

A “responsive” city is one that doesn’t just make ordinary transactions like paying a parking ticket easier, but that uses the information generated by its interactions with residents to better understand and predict the needs of neighborhoods, to measure the effectiveness of city agencies and workers, to identify waste and fraud, to increase transparency, and, most importantly, to solve problems.

The requirements for municipal governments wanting to adopt a Big Data strategy include, first, building a high-speed fiber network and, second, publishing their collected data sets publicly and with full transparency. The idea, says Goldsmith, is to let employees see across agencies, to let residents hold their city hall accountable, and also to provide data that can lead to breakthroughs and solutions from both inside and outside government.

This is, indeed, a potential boon for municipal governments.  However, the potential downside to governments relying on Big Data, it must be reiterated, is that Big Data has often been criticized for enabling discrimination on the basis of race, religion, gender, sexual orientation, etc.  Alistair Croll famously declared it this generation's Civil Rights issue.

In fact, a recent report by The Leadership Conference on Civil and Human Rights highlighted this danger of institutionalizing discrimination, and even endorsed a document titled, "Civil Rights Principles for the Era of Big Data".  However, the group's recommendations include such lofty goals as "an end to high-tech profiling" and "greater individual control over personal information", both of which seem unlikely.  And by "unlikely", we mean there's no chance it's ever going to happen.

The take here is that the era of Big Data for governments is coming, like it or not.


  

Thursday, October 30, 2014

The Value of Online Confessionals: Evaluating the Secret & Whisper Apps...

As addictive as Facebook has become for some people as a means of feeling validated or popular - writing posts specifically to garner "likes", and experiencing disappointment when there's not a large response - there remains a hesitation by most Facebook users to post brutally honest thoughts or confessions for fear of backlash amongst those they know, not to mention that what they post may be archived and associated with themselves forever.

Two apps address this dilemma of public confessions:  Secret and Whisper.  Secret enables you to write posts anonymously and links to your Facebook account so that only your friends can see it, even though your friends won't know it was specifically you who posted.  Meanwhile, Whisper lets you do the same thing, but the anonymous posts are visible to the general online public. 

The allure of both services is being able to write posts without personally identifiable consequences; as a reader, it is also tantalizing to see brutally honest and revealing confessionals written by people you actually know in your social network.

The fact that these apps are being so widely applauded is more a sign of great P.R. departments than anything else.  Rachel Metz writes for the MIT Technology Review that people do indeed say some nasty things on these anonymous apps, but that the good far outweighs the bad.  And one can go as far back as the founder of analytical psychology, Carl Jung, to read about the value of confession as a positive force.

However, while online confessionals may serve a positive psychological purpose, there are some inherent dangers related to the fact that they are online forums.  For instance, to what extent will even private confessions be archived considering that other "private" social apps like SnapChat have recently been hacked and users' private content was then made publicly available?  What other privacy concerns should individuals consider before posting intimate details about themselves to the Internet (because, ultimately, that's what they're still doing)?  What restrictions should there be on children or teenagers both writing posts and reading/commenting on others'?

Secret and Whisper can have a positive value, and they're certainly addictive to read because you're just dying to know who could have written such a thing.  But as far as using them to write your own confessional posts... maybe a healthy dose of skepticism ought to be in order.



  

Wednesday, October 01, 2014

Using Proxy Servers to Help the Hong Kong Protesters...

The Chinese government is cracking down on the pro-democracy demonstrations in Hong Kong using tear gas and other heavy-handed methods, and has also begun censoring Internet content and online social media.  Hong Kong, being a semi-autonomous region, typically experiences less of the Great Firewall than most of China proper; however, due to fears of the demonstrations spreading further, Instagram, YouTube, Twitter, Facebook, numerous blogs and wikis, search engine results, and more are all being blocked for residents of the territory to varying degrees.

As reported by CNN, users cannot view images on Instagram and are instead directed to a message that reads, "Can't refresh feed".  Meanwhile...

Searches on China's top search engine sites such as Baidu and Sogou for the terms "Hong Kong protest" or even "Hong Kong students" yielded irrelevant results, such as stories showing a blissful image of Hong Kong residents picnicking on the grass or how Hong Kong is welcoming tourists from the mainland during the national holiday week.

When relevant results appeared on the Chinese search engines, the articles contained a distinctively pro-China slant and even surfaced a month-old article about a small pro-Beijing counter-protest in Hong Kong.


This can hardly be considered a surprising development, and if there is a positive consequence of the Chinese government's pattern of censorship over time, it is that an entire infrastructure is already in place to help users circumvent the Great Firewall and access the sites being censored.

Basically, protesters and residents of Hong Kong need to use a proxy server.  Proxy servers tunnel users' Internet traffic through to their destination sites while masking that destination from the filters.  Users can find available proxy servers pretty easily on constantly updated public lists.
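From the user's side, routing traffic through a proxy can even be scripted.  A minimal sketch in Python 3 (the proxy address below is a hypothetical entry of the sort found on those public lists):

    import urllib.request

    # Hypothetical proxy from a public list; substitute a live one.
    proxy = urllib.request.ProxyHandler({"http": "http://203.0.113.5:8080"})
    opener = urllib.request.build_opener(proxy)

    # The request now travels via the proxy, so the censor's filter sees
    # the proxy's address rather than the blocked destination site.
    with opener.open("http://www.example.com/") as response:
        print(response.status)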

Meanwhile, for anyone observing the events in Hong Kong from afar who would like to help, setting up a proxy server for others to use is fairly simple and free.  As with many hacktivist tools these days, no programming expertise is required.