Tuesday, March 25, 2014

Timelessness vs. Timeliness: The Debate Among Scholar-Bloggers

To what extent should academics be active in social media? And to what extent should their social media presence, and the content they share, count toward career advancement and tenure? The bottom line: Is blogging legitimate political science?

These aren't exactly new questions, but most scholars who are active in cyberspace stick to writing data- or theory-driven posts, essentially replicating the wonkish style of writing found in academic journals.  There remains a widespread fear, or at least a strong hesitation, about writing subjective, opinion-based posts, lest their "amateurism" be used against the author professionally.  Thus, the "shut-the-blinds and delve-into-the-data" posture remains the norm, one that values timelessness over timeliness.

Mira Sucharov and Brent E. Sasley address this dilemma in the most recent issue of PS: Political Science & Politics (47, 1).  In their article, "Blogging Identities on Israel/Palestine: Public Intellectuals and Their Audiences", they argue strongly in favor of scholar-bloggers writing subjectively and make the case that subjectivity should be considered "an asset to be embraced rather than a hazard to be avoided".

They make three points.  First, the kinds of subjectivity and personal attachment that guide one's endeavors can actually lead to more deeply resonant critiques, thereby enhancing scholarship and teaching.  Second, by melding scholarly argument with popular writing forms, scholar-bloggers can lead the discourse on important issues through public engagement and political literacy.  And third, despite the "subjectivity hazard", being aware of one's social media audience can help maximize scholars' potential to serve the public interest in all its manifestations.

While these are agreeable points, don't they raise the specter of "activist scholars"?  And doesn't that notion make us instinctively recoil, posing an uncomfortable challenge to our conception of what a scholar is supposed to be, particularly in the role of teacher?

Robert Farley has raised another important counterpoint.  While he agrees that there is a growing acceptance of blogging as legitimate political science, and even that the discipline should provide incentives for faculty members who blog, he warns that pulling blogging too far into the fold of the discipline's existing structures "runs the risk of imposing rigid conditions and qualifications on bloggers that undermine the very benefits inherent in the nature of blogging".

What this question ultimately boils down to is credibility.  Blogging and other forms of social media can either enhance a scholar's credibility or damage it.  Thus, there is no single "correct" answer to whether social media has intrinsic scholarly value.  The question isn't binary; it depends on each individual's use of the medium.

  

Tuesday, March 18, 2014

Big Data as a Civil Rights Issue...

In classes on Information Systems, we talk about the rising use of "Big Data" - enormous data sets that are difficult to process using traditional database-management tools or data-processing applications, and that are increasingly mined for correlations that, for instance, spot business trends, personalize advertisements for individual Web users, combat crime, or determine real-time roadway traffic conditions.
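To make "finding correlations" concrete for class, here is a tiny sketch in Python using pandas.  The data is invented for illustration; real Big Data pipelines differ mainly in scale and tooling (Hadoop, Spark, and the like), not in the underlying idea:

```python
import pandas as pd

# Toy, invented data: daily rainfall (mm) and umbrella sales.
# Real pipelines run this kind of question over billions of rows.
days = pd.DataFrame({
    "rainfall_mm":    [0, 2, 5, 11, 18, 25],
    "umbrella_sales": [3, 4, 9, 15, 22, 31],
})

# The whole "find correlations" idea in one line: do rainy days
# predict umbrella sales?  (Prints a value close to 1.)
print(days["rainfall_mm"].corr(days["umbrella_sales"]))
```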

But is "personalization" just a guise for discrimination?

That's the argument put forth in Alistair Croll's instant-classic 2012 post, "Big data is our generation's civil rights issue, and we don't know it".  He argues that, although corporations market the practice of digital personalization as "better service", in practice this personalization allows for discrimination based on race, religion, gender, sexual orientation, and more.

The way this works is that mining Big Data yields lists of "trigger words" that help identify a person's race, gender, religion, sexual orientation, and so on.  Marketing companies then "personalize" their efforts toward someone based on those inferred characteristics.  And that is what makes it a civil rights issue.
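Croll describes the practice, not any company's actual code, but the mechanism is simple enough to sketch.  In this hypothetical example, the trigger-word lists, inferred attributes, and ad choices are all invented for illustration:

```python
# Hypothetical sketch of trigger-word profiling.  The word lists and
# categories are invented; no real system is being reproduced here.

# Each inferred attribute maps to words that might appear in a user's
# purchase history, posts, or search queries.
TRIGGER_WORDS = {
    "new_parent": {"diapers", "stroller", "crib"},
    "low_income_area": {"payday loan", "pawn", "rent-to-own"},
}

def infer_attributes(user_history):
    """Return the attributes whose trigger words appear in a history."""
    text = " ".join(user_history).lower()
    return {attr for attr, words in TRIGGER_WORDS.items()
            if any(w in text for w in words)}

def pick_ad(user_history):
    """'Personalize' an ad based on inferred, never-disclosed attributes."""
    attrs = infer_attributes(user_history)
    if "low_income_area" in attrs:
        return "High-interest credit offer"   # the discriminatory outcome
    return "Standard credit offer"

print(pick_ad(["payday loan store", "groceries"]))  # High-interest credit offer
```

The user is never asked about any of these attributes; the system simply infers them and acts on the inference.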

For example, American Express uses customer purchase histories to adjust credit limits based on where a customer shops - and, as a result, there have been reported cases of individuals having their credit limits lowered because they live and shop in less-affluent neighborhoods, despite having excellent credit histories.

In another example, Chicago uses Big Data to create its "heat map". According to TechPresident, the heat map is "a list of more than 400 Chicago residents identified, through computer analysis, as being most likely to be involved in a shooting. The algorithm used by the police department, in an initiative funded by the National Institute of Justice, takes criminal offenses into account, as well as known acquaintances and their arrest histories. A 17-year-old girl made the list, as well as Robert McDaniel, a 22-year-old with only one misdemeanor conviction on his record."
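The actual model behind the heat map has not been published, but a crude sketch using only the two inputs the article names - a person's own offense history and the arrest histories of known acquaintances - shows how such a list could be generated.  The names, weights, and offense counts here are entirely invented:

```python
# Crude, hypothetical sketch of an acquaintance-weighted risk score.
# Names, weights, and data are invented; the real Chicago PD model
# is not public.

offenses = {"alice": 0, "bob": 4, "carol": 1}   # prior offenses on record
acquaintances = {"alice": ["bob"], "bob": ["alice", "carol"], "carol": ["bob"]}

def risk_score(person, own_weight=2.0, peer_weight=1.0):
    """Weighted sum of one's own offenses and acquaintances' offenses."""
    own = own_weight * offenses[person]
    peers = peer_weight * sum(offenses[a] for a in acquaintances[person])
    return own + peers

heat_list = sorted(offenses, key=risk_score, reverse=True)
print([(p, risk_score(p)) for p in heat_list])
# [('bob', 9.0), ('carol', 6.0), ('alice', 4.0)] -- carol, with a single
# offense, ranks high mostly because of who she knows.
```

That last line is the McDaniel problem in miniature: a near-spotless record can still produce a high score purely through association.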

In yet another example, a Wall Street Journal investigation in 2012 revealed that Staples displays different product prices to online consumers based on their location. Consumers living near another major office supply store like OfficeMax or Office Depot would usually see a lower price than those not near a direct competitor...

 

One consequence of this practice is that areas that saw the discounted price generally had a higher average income than in the areas that saw the higher prices...

Price discrimination (what economists call differential pricing) is only illegal when based on race, sex, national origin or religion.  Price discrimination based on ownership (for example, Orbitz showing more expensive hotel options to Mac users) or on place of residence, as in the Staples example, is technically okay in the eyes of the law...

However, when you consider that black Americans with incomes of more than $75,000 usually live in poorer areas than white Americans with incomes of only $40,000 a year, it is hard not to find Staples' price discrimination, well, discriminatory.
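The Journal reported the observed behavior, not Staples' code, but the implied logic is easy to sketch.  Everything below - competitor coordinates, the distance threshold, the two prices - is invented for illustration:

```python
import math

# Hypothetical sketch of competitor-proximity pricing, as implied by
# the WSJ findings.  Coordinates, threshold, and prices are invented.

COMPETITOR_STORES = [(41.88, -87.63), (40.75, -73.99)]  # rival store locations
LIST_PRICE, DISCOUNT_PRICE = 19.99, 15.79
THRESHOLD_MILES = 20

def miles_between(a, b):
    """Rough flat-earth distance; fine at city scale for a sketch."""
    dlat = (a[0] - b[0]) * 69.0      # ~69 miles per degree of latitude
    dlon = (a[1] - b[1]) * 53.0      # ~53 miles per degree at ~40N
    return math.hypot(dlat, dlon)

def price_for(visitor_location):
    """Show the discount only if a competitor store is nearby."""
    near_rival = any(miles_between(visitor_location, s) < THRESHOLD_MILES
                     for s in COMPETITOR_STORES)
    return DISCOUNT_PRICE if near_rival else LIST_PRICE

print(price_for((41.85, -87.65)))  # near a rival store -> 15.79
print(price_for((44.30, -89.80)))  # no rival nearby    -> 19.99
```

Note that nothing in price_for mentions income or race; the discriminatory effect emerges only because competitor stores tend to cluster in more affluent areas.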

 

And in an especially frightening read earlier this month, The Atlantic published an article outlining how companies are using Big Data not only to exploit consumers, but also to exclude and alienate those deemed "undesirable".

The idea behind civil rights is that we should all be considered on an individual basis.  People should not be treated differently solely due to their race, religion, gender, or sexual orientation.  The Civil Rights Act of 1964 explicitly banned such differential treatment in the private sector.  That is why there are no longer separate drinking fountains on the basis of race.

So as Big Data permeates society, and as algorithms and modeling techniques mine for patterns that predict individual behavior, if those algorithms are indeed "personalizing" content on the basis of race, religion, gender, or sexual orientation, then how is it NOT discriminatory?

Just because it's the result of an algorithm doesn't make it OK.  Algorithms are programmed by people, after all.


  

Why Good Hackers Make Good Citizens...

By request, here is a TED Talk on why hackers make good citizens, presented by Catherine Bracy of Code for America.

  

Thursday, March 13, 2014

What Would an Internet Bill of Rights Look Like?

To little fanfare, yesterday marked the 25th birthday of the Internet's most successful "killer app" - the World Wide Web.  Its creator, Tim Berners-Lee, observed the occasion by releasing a statement arguing for the urgent need to create an Internet Bill of Rights.

What would such an Internet Bill of Rights look like?  Berners-Lee believes it should focus on the Web's founding constitutional principles of open access and open architecture, along with the protection of privacy rights.

These principles may seem on the surface to be apple-pie statements - meaning that nobody really opposes them in their simply-stated form.  However, very serious political debates have arisen demonstrating just how much the devil is in the details.  For instance, open access sounds great, but how does it play out in the F.C.C.'s rulings on Net Neutrality?  Likewise, everyone will publicly support the notion of individual privacy rights, but determining in actual practice how far government regulation should go in setting the rules for what type of data gets stored, and by whom, is certainly a bit more controversial.

The idea of an Internet Bill of Rights is not new, and should one emerge it will likely be more of an expression of constitutional principles (that's constitution with a lowercase "c"), and not a document with any sort of legal bearing.  That said, it can still be immensely valuable and important.

In typical "open" fashion, Berners-Lee is encouraging any and all Web users to head over to the Web We Want campaign and submit their own proposals.  So, armchair pundits, here's your chance to help draft the principles you want to see.  It's a massive crowdsourced effort, like the Web itself.



  

Thursday, March 06, 2014

The Problem with Facebook and Gun Sales...

Here's a case where we can see the "code is law" principle play out right before our eyes.  After coming under scrutiny in recent weeks from a variety of gun-control advocacy groups, Facebook decided yesterday to voluntarily place new restrictions on gun sales through its website.

To understand the scrutiny, consider that last week VentureBeat reported that it had arranged to buy a gun illegally on Facebook in 15 minutes.  The Wall Street Journal also reported that both assault-weapon parts and concealed-carry holsters have been advertised to teenagers on the site.  Additionally, Facebook "community" pages, such as one called Guns for Sale with over 213,000 "likes", have been freely accessible to minors.

Specifically, Facebook has announced that they will begin to...

  1. Remove offers to sell guns without background checks or across state lines illegally.
  2. Restrict minors from viewing pages that sell guns.
  3. Delete any posts that seek to circumvent gun laws.
  4. Inform potential sellers that private sales could be regulated or prohibited where they live.
All of which seems well and good.  Even gun-rights advocates shouldn't have too much of a problem with these measures, considering that their intent is not to ban gun sales on Facebook but rather to better enforce existing laws (an argument gun-rights advocates commonly make themselves).

But here's the rub.  There's a little detail in the Facebook press release: the company will rely on users to report posts and pages offering to sell guns.

So let's be clear.  With these measures, Facebook is pursuing a policy of reacting to illegal gun sales on its site, but it will not be proactive in preventing them.

The reason has to do with what The Nerfherder has previously dubbed The Politics of the Algorithm.  The advertisements Facebook displays in an individual's feed are not chosen by human decision-makers but by a mathematical algorithm.  As a result, a 15-year-old from Kentucky might be shown an advertisement selling guns from someone in Ohio based on whether the algorithm determines he might be interested in it - regardless of the fact that it is illegal under federal law to 1) sell guns to a minor, and 2) sell guns across state lines without a dealer license.

This actually happened last month.  The 15-year-old was later caught with the loaded handgun at his high school football game, and the seller has since been charged.
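Facebook's actual ad-ranking system is, of course, not public.  But the structural problem described above is easy to sketch in a hypothetical example: the ranking optimizes purely for predicted interest, while the legal rules live in a function the pipeline never calls.  All names and data below are invented:

```python
# Hypothetical sketch of interest-only ad targeting; none of this is
# Facebook's actual code.  It shows how legality can be orthogonal
# to the objective an algorithm optimizes.

from dataclasses import dataclass

@dataclass
class User:
    age: int
    state: str
    interests: set

@dataclass
class Listing:
    item: str
    seller_state: str
    tags: set

def interest_score(user, listing):
    """What the ranking optimizes: overlap between interests and tags."""
    return len(user.interests & listing.tags)

def is_legal(user, listing):
    """The check the pipeline never runs: age and interstate-sale rules."""
    return user.age >= 18 and user.state == listing.seller_state

def pick_listing(user, listings):
    # Ranked purely by predicted interest; is_legal() is never consulted.
    return max(listings, key=lambda l: interest_score(user, l))

teen = User(age=15, state="KY", interests={"hunting", "firearms"})
ads = [Listing("handgun", "OH", {"firearms"}),
       Listing("fishing rod", "KY", {"fishing"})]
print(pick_listing(teen, ads).item)             # handgun
print(is_legal(teen, pick_listing(teen, ads)))  # False
```

Filtering candidates through a legality check before ranking would be trivial in this toy example; at Facebook's scale, encoding which sales are legal for which user in which jurisdiction is the hard part - which is presumably why the company is falling back on user reports instead.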

Facebook wants to address such safety concerns and, of course, limit its legal liability.  And (not to pick on them too harshly) these measures are at least a step in the right direction.  The problem is that it's practically impossible to truly regulate online content in accordance with the law when humans have been removed from the equation.  Such concerns are an inevitable consequence of social media's dependence upon algorithms - all of which, as this case illustrates, are both flawed and modifiable.