Tuesday, April 29, 2014

Should Anonymous Reviews Online Be Banned?

Small businesses today are discovered and marketed very differently than they were a generation ago.  Reputation - especially online reputation - can make or break a budding enterprise.

The problem is this: what happens when people express extremely harsh critiques of your business in a public forum, and do so anonymously?  Are such anonymous reviews a protected form of free speech?  Or, because their authenticity cannot be ascertained, do businesses have a right to "unmask" the website's users - especially in cases of defamation?

The Virginia Supreme Court is about to answer these questions.  A case has arisen where a business named Hadeed Carpet Cleaning, Inc. filed a lawsuit against seven individual Yelp users claiming defamation, and demanded that Yelp turn over their true identities.  According to the Wall Street Journal, "So far, both the Alexandria Circuit Court and the Virginia Court of Appeals have sided with Hadeed, holding Yelp in contempt for not turning over the names.  Yelp in January appealed to the state Supreme Court, arguing that the reviews are protected under the First Amendment and that Mr. Hadeed offered scant evidence that they were fakes".

There are two real issues here.  First, how important is anonymity in posting reviews?  Second, what are a website's responsibilities as a third-party facilitator of the forum?

Anonymous speech is a monstrously large topic with an established legal tradition that goes back to America's founding.  Let's just say that it has been recognized in the American political tradition as being both valuable and vital to the spirit of the First Amendment.

That's legally speaking.  In reality, however, online anonymity is regulated or outright banned more often than most people realize.  Whether it's your ISP or network administrator banning the masking of your IP address, or Facebook prohibiting anonymous accounts that don't clearly identify you as the person you are in real life (remember when MySpace was rampant with such accounts?), the fact is that more and more online forums not only place little value on user anonymity, they outright view it as a negative.

As for the website's responsibilities, it seems pretty clear that Yelp has little to worry about thanks to the most underrated federal policy of our time - Section 230 of the Communications Decency Act of 1996.  Section 230 provides websites with immunity from liability for what their users publish.  In other words, Yelp cannot be held liable for a scathing review left by some individual any more than Facebook can for a slanderous status message or Twitter can for a personally embarrassing tweet.  Web 2.0 sites based on user-generated content are shielded from such liability.

Ironically, despite businesses like Hadeed increasingly objecting to Section 230 protections, the Act was originally devised as a boon to help support businesses and nascent industries.

Here's some food for thought.  All these same issues arise in an individual context, just as they do in a business context.  In other words, for years, people have complained about how helpless they are in the face of critical or embarrassing material being posted about them online, and how there was little recourse available to them.  Businesses are increasingly in that same boat.  Stinks, doesn't it?  But that's the trade-off with protecting privacy and anonymity, for better or worse.

The best advice going forward for businesses like Hadeed is the same as that for individuals...  Don't try to exert outright control over your online presence; it's futile, and the law may not even support you in your quest.  Instead, take steps to manage your resulting online reputation.  For example, one prudent way for Hadeed to realistically fight negative reviews would be to create incentives for its customers to go on Yelp and flood its listing with positive reviews.  No law-breaking; no subversion; just being more proactive in the marketplace of speech.
  

Tuesday, March 25, 2014

Timelessness vs. Timeliness: The Debate Among Scholar-Bloggers

To what extent should academics be active in social media? Also, to what extent should their social media presence and the content they share be considered towards career advancement and tenure? The bottom line: Is blogging legitimate political science?

These aren't exactly new questions, but most scholars who are active in cyberspace usually stick to writing data- or theory-driven posts, basically replicating the same style of wonkish writing found in academic journals. There remains a widespread fear, or at least strong hesitation, of writing subjective, opinion-based posts lest their "amateurism" be used against them professionally. Thus, this "shut-the-blinds and delve-into-the-data posture" remains the norm, where timelessness rather than timeliness is valued.

Mira Sucharov and Brent E. Sasley address this dilemma in the most recent issue of PS: Political Science and Politics (47,1). In their article, "Blogging Identities on Israel/Palestine: Public Intellectuals and Their Audiences", they argue very much in favor of scholar-bloggers writing subjectively and make the case for why it should be considered "an asset to be embraced rather than a hazard to be avoided".

They make three points.  First, that the kinds of subjectivity and personal attachments that guide one's endeavors will actually lead to more deeply resonating critiques, thus enhancing scholarship and teaching; second, that through the melding of scholarly arguments with popular writing forms, scholar-bloggers can become leaders of the discourse on important issues through public engagement and political literacy; and third, that despite the "subjectivity hazard", being aware of one's social media audience can help maximize scholars' potential to serve the public interest in all its manifestations.

While these are agreeable points, don't they raise the specter of "activist scholars"?  And doesn't that notion make us instinctively recoil, posing an uncomfortable challenge to our conceptions of what a scholar is supposed to be, particularly in their roles as teachers?

Robert Farley has also argued another important counterpoint: While there is a growing acceptance of blogging as legitimate political science, and that the discipline should even provide incentives for faculty members who blog, he warns that trying to bring blogging too much into the fold of the discipline's existing structures "runs the risk of imposing rigid conditions and qualifications on bloggers that undermine the very benefits inherent in the nature of blogging".

What this question ultimately boils down to is credibility. Blogging and other forms of social media can be used to either enhance a scholar's credibility or to damage it. Thus, there is no single "correct" answer to the question of whether or not social media has intrinsic scholarly value. The question isn't a binary one, but rather is dependent on each individual's use of the medium.

  

Tuesday, March 18, 2014

Big Data as a Civil Rights Issue...

In classes on Information Systems, we talk about the rising use of "Big Data" - enormous collections of data sets that are difficult to process using traditional database management tools or data processing applications, and which are increasingly used to find correlations that, for instance, spot business trends, personalize advertisements for individual Web users, combat crime, or determine real-time roadway traffic conditions.

But is "personalization" just a guise for discrimination?

That's the argument put forth in Alistair Croll's 2012 instant-classic post titled, "Big data is our generation's civil rights issue, and we don't know it". He goes on to argue that, although corporations market the practice of digital personalization as "better service", in practice this personalization allows for discrimination based on race, religion, gender, sexual orientation, and more.

The way this works is that, by mining Big Data, a list of "trigger words" emerges that helps identify people's race, gender, religion, sexual orientation, etc.  Marketing companies then "personalize" their efforts towards someone based on such characteristics.  And that makes it a civil rights issue.
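Mechanically, the practice is almost embarrassingly simple.  Here's a sketch (the trigger words and segment labels are entirely hypothetical, invented for illustration):

```python
# Toy "trigger word" segmentation: purchase or search terms are
# mapped to inferred demographic segments, which ads then target.
TRIGGER_WORDS = {
    "quinceanera": "hispanic-household",
    "kosher": "jewish-household",
    "maternity": "expecting-parent",
}

def infer_segments(term_history):
    """Return the demographic segments suggested by a user's terms."""
    return {TRIGGER_WORDS[t] for t in term_history if t in TRIGGER_WORDS}

print(infer_segments(["maternity", "stapler", "kosher"]))
```

Nothing in that code ever asks for your race or religion directly - it doesn't have to.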

For example, American Express uses customer purchase histories to adjust credit limits based on where a customer shops - and as a result there have been cases reported of individuals having their credit limits lowered because they live and shop in less-affluent neighborhoods, despite having excellent credit histories.

In another example, Chicago uses Big Data to create its "heat map". According to TechPresident, the heat map is "a list of more than 400 Chicago residents identified, through computer analysis, as being most likely to be involved in a shooting. The algorithm used by the police department, in an initiative funded by the National Institute of Justice, takes criminal offenses into account, as well as known acquaintances and their arrest histories. A 17-year-old girl made the list, as well as Robert McDaniel, a 22-year-old with only one misdemeanor conviction on his record."

In yet another example, a Wall Street Journal investigation in 2012 revealed that Staples displays different product prices to online consumers based on their location. Consumers living near another major office supply store like OfficeMax or Office Depot would usually see a lower price than those not near a direct competitor...

 

One consequence of this practice is that areas that saw the discounted price generally had a higher average income than in the areas that saw the higher prices...

Price discrimination (what economists call differential pricing) is only illegal when based on race, sex, national origin or religion. Price discrimination based on ownership — for example, Orbitz showing more expensive hotel options to Mac users—or on place of residence, as in the Staples example, is technically okay in the eyes of the law...

However, when you consider that black Americans with incomes of more than $75,000 usually live in poorer areas than white Americans with incomes of only $40,000 a year, it is hard not to find Staples' price discrimination, well, discriminatory.
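The Staples rule the Journal describes boils down to a one-line pricing decision.  A minimal sketch (the prices, discount, and distance threshold here are invented; the actual logic was never disclosed):

```python
def quoted_price(base_price, miles_to_nearest_competitor,
                 discount=0.10, radius=20):
    """Toy location-based differential pricing: shoppers within
    `radius` miles of a rival store (an OfficeMax or Office Depot)
    see a discounted price; everyone else pays full price."""
    if miles_to_nearest_competitor <= radius:
        return round(base_price * (1 - discount), 2)
    return base_price

print(quoted_price(15.79, 3))    # shopper near a competitor
print(quoted_price(15.79, 45))   # no competitor nearby
```

Swap "distance to a competitor" for any variable that happens to track neighborhood affluence, and the civil rights concern writes itself.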

 

And in an especially frightening read earlier this month, The Atlantic published an article outlining how companies are using Big Data not only to exploit consumers, but also to exclude and alienate especially "undesirable" consumers.

The idea behind civil rights is that we should all be considered on an individual basis.  People should not be treated differently solely due to their race, religion, gender, or sexual orientation.  The Civil Rights Act of 1964 explicitly banned such differential treatment in the private sector.  That is why there are no longer separate drinking fountains on the basis of race.

So as Big Data permeates society, and as algorithms and various modeling techniques try to find patterns that seek to predict individual behavior, if those algorithms are indeed "personalizing" content on the basis of race, religion, gender, or sexual orientation, then how is it NOT discriminatory?

Just because it's the result of an algorithm doesn't make it OK.  Algorithms are programmed by people, after all.


  

Why Good Hackers Make Good Citizens...

By request, here is a TED Talks video on why hackers make good citizens, presented by Catherine Bracy from Code for America.






  

Thursday, March 13, 2014

What Would an Internet Bill of Rights Look Like?

To little fanfare, yesterday marked the 25th birthday for the Internet's most successful "killer app" - the World Wide Web.  Its creator, Tim Berners-Lee, marked the day by releasing a statement and arguing for the urgent need to create an Internet Bill of Rights.

What would such an Internet Bill of Rights look like?  Berners-Lee believes it should be focused on the Web's original founding constitutional principles of open access and open architecture and, additionally, the protection of privacy rights.

These principles may seem on the surface to be apple-pie statements - meaning that nobody really opposes them in their simply-stated form.  However, very serious political debates have arisen demonstrating just how much the devil is in the details.  For instance, open access sounds great, but how does it play out in the F.C.C.'s rulings on Net Neutrality?  Likewise, everyone will publicly support the notion of individual privacy rights but, in actual practice, determining to what extent government regulations are desirable in order to set the rules for what type of data gets stored, and by whom, is certainly a bit more controversial.

The idea of an Internet Bill of Rights is not new, and should one emerge it will likely be more of an expression of constitutional principles (that's constitution with a lowercase "c"), and not a document with any sort of legal bearing.  That said, it can still be immensely valuable and important.

In typical "open" fashion, Berners-Lee is encouraging any and all Web users to head over to the Web We Want campaign and submit their own proposals.  So, armchair-pundits, here's your chance to help draft the legislation that you want to see.  It's a massive crowdsourced effort, like the Web itself.



  

Thursday, March 06, 2014

The Problem with Facebook and Gun Sales...

Here's a case where we can see the "code is law" principle play out right before our eyes.  After coming under scrutiny in recent weeks by a variety of pro-gun-control advocacy groups, Facebook decided yesterday to voluntarily place new restrictions on the selling of guns through its website.

To understand the scrutiny, consider that last week VentureBeat reported that it arranged to buy a gun illegally on Facebook in 15 minutes.  Also, the Wall Street Journal reported that both assault-weapons parts and concealed-carry weapon holsters have been advertised to teenagers on the site.  Additionally, Facebook "community" pages - such as one called Guns for Sale, with over 213,000 "likes" - have been freely available to minors as well.

Specifically, Facebook has announced that they will begin to...

  1. Remove offers to sell guns without background checks or across state lines illegally.
  2. Restrict minors from viewing pages that sell guns.
  3. Delete any posts that seek to circumvent gun laws.
  4. Inform potential sellers that private sales could be regulated or prohibited where they live.
All of which seems well and good.  Even gun rights advocates shouldn't have too much of a problem with these measures considering that their intent is not to ban gun sales on Facebook but rather to better enforce existing laws (which is an argument they commonly make themselves).

But here's the rub.  There's the little detail in the Facebook press release about how the company will rely on users to report posts and pages offering to sell guns.  

So let's be clear.  With the announcement of these measures, Facebook is pursuing a policy of reacting to illegal gun sales on its site, but will not be proactive in preventing them.

The reason has to do with, what The Nerfherder has previously dubbed, The Politics of the Algorithm.  Any advertisement Facebook displays in an individual's feed is decided upon not by human decision-makers, but by a mathematical algorithm.  As a result, a 15-year-old from Kentucky might be shown an advertisement selling guns from someone in Ohio based on whether or not the algorithm determines he might be interested in it - regardless of the fact that it is illegal under federal law to 1) sell guns to a minor, and 2) sell guns across state lines without a dealer license.

This actually happened last month.  The 15-year-old was later caught with the loaded handgun at his high school football game, and the seller has since been charged.

Facebook wants to address such safety concerns and, of course, limit its legal liability.  And (not to pick on them too harshly) these measures are at least a step in the right direction.  The problem is that it's practically impossible to truly regulate online content in accordance with the law when humans have been removed from the equation.  Such concerns are an inevitable consequence of social media's dependence upon algorithms - all of which, as this case illustrates, are both flawed and modifiable.
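To make the reactive-versus-proactive distinction concrete: a proactive system would run something like the following check before a listing is ever displayed, whereas the announced policy shows the listing and waits for user reports.  (A minimal sketch; the fields and rules are simplified assumptions, not Facebook's actual system.)

```python
def may_show_gun_listing(buyer_age, buyer_state, seller_state,
                         seller_has_dealer_license):
    """Proactive pre-display check mirroring the two federal rules
    mentioned above: no sales to minors, and no interstate sales
    without a licensed dealer.  (Simplified for illustration.)"""
    if buyer_age < 18:
        return False
    if buyer_state != seller_state and not seller_has_dealer_license:
        return False
    return True

# The Kentucky scenario: a 15-year-old shown an Ohio seller's ad.
print(may_show_gun_listing(15, "KY", "OH", False))
```

The check itself is trivial; the hard part - and the part Facebook has declined to take on - is reliably knowing the buyer's age, location, and the seller's licensing status before the algorithm serves the ad.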


  

Thursday, February 27, 2014

WhatsApp, Messaging Wars, and Privacy's Demise...

There was a lot of commotion last week when Facebook announced it was acquiring WhatsApp for a stunning $19 billion.  Was that valuation insanely high?  Was this a signal that the market is experiencing a new tech bubble and that we can expect a round of major tech mergers and acquisitions this year?  Perhaps, as the New York Times suggested, the messaging app wars are just getting started?

Everyone has their own opinion about the WhatsApp valuation, but lost in all the hype is this...  privacy advocates have suffered yet another setback.

The very fact that Facebook is the acquirer - the same Facebook which has repeatedly come under fire for purposely obfuscating the ways in which individuals can control the privacy levels governing their own information - is the clearest signal of the direction the messaging industry is headed.  Public outrage over N.S.A. surveillance be damned, Facebook outwardly wants to start performing the same kind of data mining not only on your statuses, photos, and videos, but on your smartphone messages as well.  The content of your messages will now surely be factored into its search engine and advertising algorithms.

It's not as if WhatsApp wasn't data mining its messaging service already.  The problem is that they are being so heavily rewarded ($19 billion for a company with 55 employees = $345 million of value per employee) for doing exactly what privacy advocates despise, and for doing it well.  Does anyone doubt that now every other messaging competitor is going to look at those numbers and try to emulate this model, if they weren't doing so already?

This need not be the case, and it's certainly not inevitable.  Let's propose an alternative model.  There's a messaging app called TextSecure which makes the bold assumption that people actually might value their privacy and prefer not to have all of their communications archived forever on some corporation's server and mined for data that will then be used for commercial advertising.  TextSecure is encrypted, is open source, and "the server never has access to any of your communication and never stores any of your data".

As consumers, we have a very real capacity to influence the direction of a lot of these policies.  Especially since all of these apps are the same price (free), making a conscious decision over which one to use and support is a decision that may have greater consequences in the long run than simply being a matter of which interface has a sleeker design.

In other words, there's something we can do about it.

  

Thursday, February 06, 2014

What Matters More in Building an Ultra-High-Speed Infrastructure - Speed or Reputation?

This morning NPR profiled the city of Chattanooga, Tennessee - which is the first American city with an ultra-high-speed fiber-optic network providing Internet access with speeds of up to one gigabit per second to every business, residence, and public and private institution.  For context, that's 50 times the average speed for homes in the rest of the country.

We hear all the time about the importance of creating a high-tech infrastructure for the 21st century.  How it will spur new businesses and job creation and stimulate new economic climates based on innovation.  But what does the case of the "Gig City" - which was rolled out in 2009 - say about infrastructure's actual importance?

A few things to factor into the equation...  First of all, there is the public vs. private issue to consider.  Chattanooga's gigabit network is taxpayer-owned, resulting from a $111 million federal stimulus grant in 2009 that was actually designed for the local power utility to create a smart grid, and that public utility then borrowed an additional $219 million to finish the project.  The fact that the network is publicly owned stands in contrast to privately owned gigabit networks now found in other cities around the country run by firms like Google.

Second, in the four years since its rollout, less than 8% of subscribers and only about 55 businesses have signed up for the gigabit service, which is priced at $70 per month.  This low adoption rate could seemingly make the case against the importance of a high-speed infrastructure.  However, as J. Ed. Marston from the Chattanooga Chamber of Commerce has said, the high-speed infrastructure has done much to "invigorate the entrepreneurial scene".  For instance, the Chamber's INCubator includes 20 tech companies and a 91% success rate.

Third, on the job creation front, it is unclear statistically how much the high-speed infrastructure has made an impact.  According to the New York Times, while "The Gig" created about 1,000 jobs in the last three years, the Department of Labor reported that Chattanooga still had a net loss of 3,000 jobs in that period, mostly in government, construction, and finance.

Fourth, there is the familiar problem that, whenever a new ultra-high-tech infrastructure is rolled out, no one quite knows what to do with it.  As explained by Blair Levin of Gig U., no one is going to design products that can run only on a one-gigabit-per-second network if hardly any such networks exist elsewhere.

Which brings us back to our original question.  If a gigabit network has such low adoption rates, and it is unclear how much business growth or new job creation can be attributed to it, then how important is such an ultra-high-speed infrastructure, really?

Proponents will argue that its value shouldn't be quantified so narrowly, and that having such an infrastructure attracts capital and talent into communities that probably wouldn't flow into them otherwise.  However, while I agree with this line of reasoning, what must be remembered is that this isn't an argument in favor of ultra-high-speed networks themselves, but rather for what they represent.  What's most valuable for a community that invests in such a network is not necessarily the speed of their network, but rather the reputation that a community acquires for showing a willingness to invest in it in the first place.

Chattanooga's "Gig City" demonstrates that reputation trumps speed.


  

Thursday, January 30, 2014

"Hadrian's Firewall" and Internet Censorship in Britain...

Without much attention, just before Christmas British ISPs put into effect a new system whereby all Internet subscribers would be required to actively choose whether they wanted filtering that would block material in broad categories such as sex, alcohol, violence, and hate speech.  At first glance, this doesn't seem too awful.  The decision is in the hands of the individual consumer, and not the government or a private corporation, right?

But here's the rub.  As laid out by TechPresident's Wendy Grossman, the biggest complaints are that there is no transparency about what is being blocked, it's extremely difficult to get an innocent site unblocked, and that the filters can be easily bypassed by determined individuals anyway.  The patchwork of different ISPs using different filtering methods has made it "almost impossible for the owner of a small online business to find out if it's being erroneously blocked and by whom - and no ISP seems to have a clear mechanism for redress".

Furthermore, the "blunt-instrument approach" to categories can lead to major problems.  For example, very legitimate websites have been blocked including child abuse hotlines, suicide prevention sites, and even many police websites - linked in the broad categorization of the filters to "violence".  This is reminiscent of problems filters have raised in U.S. schools and libraries where, for example, information-based websites about breast cancer were categorized by algorithms as being linked to pornography.

Clearly this is a problem, and a far too common consequence of the very noble goal of providing parents with filtering options for their children.  However, the best strategy for providing parents with filtering choices ought to be based exactly on that - more choices.  Richard Clayton is right that the best path forward lies in making it easier for people to install good user-controllable filtering tools on their own machines rather than having them controlled at the ISP's end.  Not everybody in a household has the same needs and requirements, so putting the decision-making capability in the hands of users, allowing for more customization and reviewable analysis, ought to help ensure that filtering does not become the first step on a slippery slope towards censorship.

And for goodness sake, let's have a little transparency, please.


  

Tuesday, January 28, 2014

Will Snapchat Ever Be a Useful Professional Tool?

Snapchat is in that category of seemingly bizarre social media products that pundits mock and that causes laymen to scratch their heads - yet its use is so widespread that it recently rejected a $3 billion purchase offer from Facebook and can now lay claim to as many as 350 million snaps in a day.  For the uninitiated, it's a photo-sharing service where all images are set to self-destruct after 10 seconds.
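The self-destruct mechanic is conceptually simple: a message carries an expiration time, and any attempt to view it afterward returns nothing.  A minimal sketch of that idea (this is not Snapchat's implementation - a real service also has to delete the payload from its servers and fight screenshots):

```python
import time

class EphemeralSnap:
    """A payload that becomes unreadable after `ttl` seconds."""
    def __init__(self, payload, ttl=10):
        self.payload = payload
        self.expires_at = time.monotonic() + ttl

    def view(self):
        """Return the payload, or None once the snap has expired."""
        if time.monotonic() >= self.expires_at:
            return None          # self-destructed
        return self.payload

snap = EphemeralSnap(b"photo-bytes", ttl=10)
print(snap.view() is not None)   # a fresh snap is still viewable
```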

But is Snapchat destined to be used almost exclusively by teenagers?  Or does it have a future as a valuable tool for professionals?

That's the question raised by K-Street Cafe's Norah Heintz.  In response to high-flying claims made by Pinger CEO Greg Woock that "erasable" social communication represents the future of the medium, she argues that Snapchat will never be able to compete with Facebook and Twitter because "it's far too private.  Sharing information about oneself is intrinsically rewarding, and I would go so far to say that if the personal information shared is programmed to disappear in seconds, it's fundamentally less satisfying to share".

However, what Heintz may be underestimating is the significant chilling effect caused by, what the New York Times' Nick Bilton has dubbed, "the anxiety of permanence".  Many individuals who have online social-networking accounts do not actively engage or post on them for fear of infinite archiving.  Extend that logic, and the possibility of "erasable" social communication may actually increase the number of active users participating in online social networks and/or significantly alter the types of communications people are willing to share.

For better or worse, that is.  While that principle clearly spells trouble in terms of the behavior of teenagers, it also holds intriguing potential for professionals in a collaborative business environment.

What Snapchat has revealed is that there clearly is an undeniable market for erasable social media.  And my guess is that that market isn't confined to America's high schools.


  

Thursday, January 09, 2014

"Commotion" and Protecting Privacy Through Mesh Networks...

With last fall's revelations about widespread N.S.A. surveillance, a market has clearly emerged for enhanced-privacy software tools.  In a USA Today poll, 54% of Americans said they wanted more privacy even at the expense of some government security.  Now, the race is on to meet that demand.

While websites like Google and Facebook, and cellular companies like Verizon and AT&T, all try their damnedest to convince their users that their privacy is being protected, their respective measures only go so far - and certainly haven't protected against the type of surveillance engaged in by the N.S.A.  The real problem is with the telecommunications infrastructure.  Even if two people are exchanging a message while standing right next to each other, that message is still routed through a small handful of "chokepoints" - like a broadcast tower operated by Verizon, or a network hub owned by your ISP - and those are where the N.S.A. targeted its surveillance activities.

With a centralized backbone infrastructure being the problem, more people have come to realize that the only way to strengthen their privacy is by circumventing such corporate-controlled infrastructure in the first place.  To this end, the New America Foundation has released the Commotion Wireless Internet Project, a free and open source software toolkit to enable people to create decentralized ad-hoc mesh networks relatively quickly and easily.  These mesh networks directly connect one device to another - whether cell phones or laptops, etc. - thereby creating an intranet-like network on a local level where the devices themselves form the infrastructural backbone.  In other words, returning to the above example, if two people try to exchange a message while standing right next to each other, Commotion would send the message directly from one person's device to the other's, bypassing the traditional chokepoints.
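The routing idea behind a mesh network can be sketched in a few lines.  In this toy model (device names and radio links are invented; Commotion's actual routing protocol is far more sophisticated), each device only knows its direct neighbors, and a message hops device-to-device until it reaches its destination - no tower or ISP hub in the path:

```python
from collections import deque

# Toy ad-hoc mesh: each device lists the neighbors it can reach
# directly by radio.
mesh = {
    "alice-phone": ["bob-phone", "cafe-laptop"],
    "bob-phone": ["alice-phone", "dana-phone"],
    "cafe-laptop": ["alice-phone", "dana-phone"],
    "dana-phone": ["bob-phone", "cafe-laptop"],
}

def route(src, dst):
    """Breadth-first search for a hop-by-hop path between devices."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbor in mesh[path[-1]]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None    # no path: the devices are out of radio range

print(route("alice-phone", "dana-phone"))
```

Because every device doubles as a relay, the network's backbone is the users themselves - which is precisely what makes it resistant to chokepoint surveillance.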

Mesh networks are not a new technology, but with the recent shifting of public opinion on the privacy issue, virtually any software that acts as an Internet privacy, security, or circumvention tool is sure to get new (or renewed) attention.  The demand is there, and in the developer community, the race is on.


  

Sunday, December 22, 2013

Why Aren't Faculty Driving the Conversation About MOOCs?

Massive Open Online Courses (MOOCs) continue to be all the rage.  Institutions of higher education are offering them in ever-greater numbers and an entire industry has sprouted up in the private sector seeking to deliver them.  They're often touted as innovative vehicles for expanding access to higher education and as a potential savior for cash-strapped universities.

However, as Susan Meisenhelder asks in this month's journal of Thought & Action, why isn't the professoriate driving the conversation about MOOCs?

Meisenhelder takes issue with both primary claims about MOOCs being a force for good.  First, she argues that, rather than expanding access to higher education for low-income people who might not otherwise be able to afford it, MOOCs are more likely to increase the digital divide.  Here are the telling statistics:  drop-out rates in the courses hover around 90 percent, and of the tiny percentage who actually do obtain a "pass" for the course, 85 percent already had a BS or BA degree, and 80 percent said they had taken a comparable course in a regular university before enrolling.  Thus, she refutes the access claim because those who are most likely to benefit are already "technologically-savvy, academically well-prepared people", and because the data suggest MOOCs are "just the latest push toward a two-tiered higher education system based on social class".
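Run those cited percentages through a hypothetical cohort and the access problem becomes stark (the cohort size of 100,000 is invented for round numbers; the rates are the ones cited above):

```python
# Back-of-the-envelope arithmetic on the cited MOOC statistics.
enrolled = 100_000
pass_rate = 0.10          # ~90 percent drop-out rate
already_degreed = 0.85    # share of passers who already hold a BA/BS

passers = round(enrolled * pass_rate)
new_to_higher_ed = round(passers * (1 - already_degreed))
print(passers, new_to_higher_ed)   # 10000 completers, 1500 new to higher ed
```

In other words, out of every 100,000 enrollees, only about 1,500 completers are people the access argument is actually about.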

Second, she questions MOOCs' claim to be a component of higher education at all.  Most consist of little more than "sage on the stage" lecture videos: they offer no interaction with the professor, there is little to no required reading, tests are multiple choice, there are few, if any, writing assignments, and, because class sizes can run into the tens of thousands, student assessment and grading are performed almost exclusively by other students while the search continues for "satisfactory" robo-grading programs.  She rightly states that "any faculty member teaching an in-person course with these characteristics could expect the harshest criticism".

I would like to chime in at this point to express my agreement, particularly with her "quality" argument.  Having enrolled in a few MOOCs myself, I will attest that they should in no way, shape, or form be thought of as a replacement for traditional university classes.  If nothing else, how can someone claim to be receiving an "education" when they often have no ability to ask questions?  However, I'll also stake out a middle-ground position and simultaneously argue that, while MOOCs surely are no replacement for traditional classes, they definitely should be considered "another tool in the box".  Certain types of courses - particularly, on technological subjects that are more instructive and less conversationally-driven, by nature - can indeed be valuable.  They just shouldn't be worth any college credits.

Which brings me to the "access argument".  The jury is still out on the extent to which MOOCs can realistically serve low-income populations, but one thing that's not in dispute is that higher education is inaccessible to too many Americans, mainly due to cost.  In that context, is it so bad to have "Intro to Programming" classes free and open to everyone - including those with maybe only a passing curiosity about the subject?  I think not.  In fact, higher education may actually benefit in the long run from people being able to freely sample different academic fields and indulge their curiosities.  Again, so long as they're not perceived as a replacement for the genuine article, the more free educational content, the better.

In the end, Meisenhelder considers how the professoriate can contribute more to the MOOC conversation.  She recommends 1) pushing private institutions that offer MOOCs for credit to inform students about the data on success in MOOCs, empowering students to make their own informed choices; 2) conducting further independent research on MOOCs so as not to rely on the "research agenda" driven by sponsoring universities and corporate providers; 3) following the money being made by corporations and "the cottage industry of consultants driving the MOOC train"; and 4) reaching out more effectively on these issues to students, who "will answer these questions themselves if we ask the right ones".  I'd also suggest that faculty members actually enroll in a MOOC to experience the positives and negatives first-hand, and let that experience, rather than emotions or preconceptions, inform their opinions.

The original promise of MOOCs was expanded access to quality education.  It remains so.  Teaching faculty, in becoming more proactive in the debate, have an opportunity to directly improve upon that promise.


  

Tuesday, December 10, 2013

An Hour of Code...

You may have been surprised to see that Google's homepage yesterday included this message under its search box: "Be a maker, a creator, an innovator. Get started now with an Hour of Code".

An Hour of Code is a project led by Code.org that aims to give everybody at least a modicum of understanding of what computer programming is. Its underlying logic, quite clearly, is that in this digital age it's arguably impossible for individuals to understand their daily reality if they don't understand the programming behind so many aspects of their social, economic, and cultural existences.

It's a noble goal, to be sure, and their website offers a petition and various tools for educators to incorporate an hour of teaching code into their classrooms. However, most people tweeting their participation seem to be focusing heavily on HTML. Maybe that's a good place to start for, say, young students, but HTML is a markup language, not truly a programming language. The Hour of Code project thus raises, once again, the important distinction between being code-literate and being a good programmer.
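The distinction is easy to make concrete. An HTML tag like `<h1>Hello</h1>` only describes how content should be displayed; a program makes decisions and repeats work. Here is the kind of classic first exercise a beginner might tackle in a real programming language - a hedged sketch in Python, not drawn from any official Code.org tutorial:

```python
def fizzbuzz(n):
    """The classic beginner exercise: loops, conditionals, and string
    logic -- things markup like HTML simply cannot express."""
    result = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            result.append("FizzBuzz")   # divisible by both 3 and 5
        elif i % 3 == 0:
            result.append("Fizz")
        elif i % 5 == 0:
            result.append("Buzz")
        else:
            result.append(str(i))
    return result

print(fizzbuzz(15))  # the 15th entry is "FizzBuzz"
```

Block-based editors like Scratch and Blockly express exactly this kind of loop-and-conditional logic visually, which is what makes them genuine programming environments rather than markup.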

A helpful suggestion might be to introduce people to code through visual programming languages (VPLs), which have been gaining in popularity and are often designed expressly for novices. I can attest that some computer science professors at my university have used VPLs and editors like Scratch, Blockly, and App Inventor to introduce New York City public school teachers to programming concepts while on sabbatical.

It's terrific that An Hour of Code has received support from the President and other high-profile individuals. Let's just remember that it can really only be considered a success if it leads to students wanting to pursue more than just One Hour.


  

Wednesday, November 20, 2013

Should the 99% Harbor Resentment Against the Tech Elite?

An eye-catching story on the front page of The Economist grabbed my attention this afternoon. Adrian Wooldridge writes that there's a coming "peasants' revolt against the sovereigns of cyberspace". He argues that people's love of iPhones and other popular gadgets has thus far largely exempted the tech elite from Occupy Wall Street-style protests against the plutocracy, but that this exemption inevitably can't last.

Is there indeed a fundamental concentration of power worthy of concern? Consider the question relative to other industries. Wooldridge notes that Mark Zuckerberg owns 29.3% of Facebook and Larry Ellison owns 24% of Oracle. By contrast, the largest single investor in Exxon Mobil controls only 0.04% of the stock.

A few years ago the new economy was a wide-open frontier. Today it is dominated by a handful of tightly held oligopolies. Google and Apple provide over 90% of the operating systems for smartphones. Facebook counts more than half of North Americans and Europeans as its customers. The lords of cyberspace have done everything possible to reduce their earthly costs. They employ remarkably few people: with a market cap of $290 billion Google is about six times bigger than GM but employs only around a fifth as many workers. At the same time the tech tycoons have displayed a banker-like enthusiasm for hoovering up public subsidies and then avoiding taxes. The American government laid the foundations of the tech revolution by investing heavily in the creation of everything from the internet to digital personal assistants. But tech giants have structured their businesses so that they give as little back as possible...

Growing political involvement will inevitably make these plutocrats powerful enemies. Right-wingers are furious with their stand on immigration. Others are furious with them for getting into bed with the national-security state. Everyone with any nous is beginning to finger them as hypocrites: happy to endorse “progressive politics” such as tighter labour and environmental regulations (and to impose the consequences of that acceptance on small business) just so long as they can export the few manufacturing jobs that they create to China.

Without placing a value judgment on whether public resentment towards the wealthiest 1% is a positive or negative social characteristic, a different question beckons... What, if anything, sets the tech-elite apart from those wealthiest of plutocrats in other sectors?
  

Wednesday, November 13, 2013

Is Bitcoin a Form of Hacktivist Protest Software?

Just brainstorming a few ideas for a conference paper and thinking of last week's demonizing of the Bitcoin virtual currency in Time magazine's cover story, linking it to the Deep Web...

Hacktivism has been around for some years now, but what's increasingly attention-worthy is that computer hacking for political purposes is gradually evolving away from hackers writing code and toward the advent of social protest software. In the past, a hacker or group of hackers would, for example, use their knowledge of code to launch a distributed denial of service (DDoS) attack against a website. What is becoming more common is for hackers to develop user-friendly software that anyone in the mainstream public can download and use to launch their own DDoS attacks by simply stepping through a wizard and clicking a button, without having to understand any code at all.

Some quick examples of such social protest software include Tor, PHProxy, Cain and Abel, NetTools, WireShark, AngryIP, and dozens of others found on sites like SourceForge:DDOS or AstaLaVista.

Of course, most of these software applications have very legitimate uses and are in no way illegal or, for that matter, should necessarily even be deemed suspicious. But what is a handy network monitoring tool for one person may be, well, a network monitoring tool serving a very different purpose for another.

First question - To what extent can this debate be framed in terms of the "Code As Speech" literature? In other words, does the very existence of such software constitute a form of political speech or protest, or is such software merely a delivery vehicle, or forum, for political speech or protest? To put it yet another way, is the software a tool, or is it an end in and of itself?

Second question - In terms of cybersecurity, what policy responses to the rise of hacktivist software can we uncover? At quick glance, it seems the only notable responses have involved high-profile arrests or enhanced sentencing. But, again, most of the software is legal and has very legitimate uses, so perhaps that shouldn't be surprising.

Third question - How can we draw a categorical distinction among the various software applications between those that are tools for hackers versus those that are hacker tools for ordinary people?

Fourth question - Why have such hacktivist software tools thus far largely failed to go viral in the mainstream? Are the tools not good enough? Are people simply unaware of their existence? Or is it that people just aren't that interested in engaging in hacktivist activities?

In order to shed light on some of these questions, it would be interesting to perform a detailed case study of Bitcoin. As a virtual P2P currency - decentralized, "mined" through computational processes, and traded on cyber exchanges - one could argue that the Bitcoin system's very existence constitutes a direct challenge, or protest action, against established institutional currency regimes. As such, the Bitcoin system is, by design and by definition, a form of hacktivist software. Perhaps we can even label it a "protest currency".
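To make the "mined through computational processes" point concrete, here is a toy sketch of the proof-of-work idea behind Bitcoin mining - a deliberately simplified illustration, not actual Bitcoin protocol code (the real system hashes structured block headers against a dynamically adjusted difficulty target):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce such that SHA-256(block_data + nonce) starts
    with `difficulty` leading zero hex digits -- the 'computational
    process' that makes minting new coins costly."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("example transactions", 4)
print(nonce)  # some nonce whose hash clears the difficulty target
```

Raising `difficulty` by one multiplies the expected work by 16, which is roughly how the real network throttles the rate at which new blocks are found.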

Or not :-)

  

Tuesday, November 05, 2013

The Public Outcry Over N.S.A. Surveillance Isn't Going Away...

In September, when it was reported that the N.S.A. had been engaged in mass surveillance of virtually all Internet traffic, there followed a public outcry that has yet to subside. However, to many individuals working within the intelligence community, and the larger national security complex in general, the reaction was more ho-hum; something of a shrugging of the shoulders. Indeed, the surprise was that so many people were so surprised.

With the passage of a few weeks, we have all had the chance to process these events and should now start giving this issue some meaningful perspective.

First of all, and let's not beat around the bush: yes, the government is monitoring all Internet traffic.  And while that's a potentially frightening proposition, two things need to be kept in mind before taking an alarmist position - 1) this is nothing new; the federal government has been trying for many years to perform such all-encompassing surveillance of cyberspace, and doing so in full public view, as was the case with the Clipper Chip proposal in the 1990s; 2) there is no individual at the N.S.A. or any other agency sitting at a desk reading your emails.  The entire program is implemented using data mining - which means that a software algorithm seeks out specific patterns and raises a flag when it finds one.  That's a beast of a completely different sort.
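A minimal sketch of what pattern-based flagging means in practice - the rules below are purely hypothetical illustrations, not the N.S.A.'s actual criteria:

```python
import re

# Hypothetical rules: a message is flagged only if it contains ALL the
# terms in some pattern.  No human reads anything unless a flag is raised.
SUSPICIOUS_PATTERNS = [
    {"wire", "transfer", "offshore"},
    {"password", "urgent"},
]

def flag_message(text: str) -> bool:
    """Return True if the message matches any predefined pattern."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return any(pattern <= words for pattern in SUSPICIOUS_PATTERNS)

messages = ["lunch at noon?", "urgent: send me your password"]
print([flag_message(m) for m in messages])  # → [False, True]
```

The point of the sketch: surveillance at this scale is a filtering problem, and the privacy question turns on who writes the patterns and what happens after a flag is raised.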

Second, and let's be clear about something else: private businesses and corporations have been monitoring all of your Internet traffic for years too.  Whether it's Google or Verizon or Apple or Facebook, every single e-focused corporation in existence monitors the content of your emails, your search queries, your browsing history, and your social networking behavior to the fullest extent it can technically achieve.  AND you have given them your explicit permission to do so by accepting their Terms of Service agreement when you first began using their services.

So Big Brother clearly exists, and has for quite a while, in the form of both the government and private corporations.  Why, then, the disparity in public outcry?

Third, on the technical side, intentionally creating backdoors in the hardware and software components of virtually every product on the global market, which is what the government has reportedly attempted, is a horrific mistake.  It rests on the false logic that "in order to make everything more secure, we need to make it less secure".  Or we can use the analogy of a government mandate that every house keep a key under the front doormat, just in case the authorities ever need to look inside without your permission - with the whole system depending on no one ever discovering that every other house keeps its key under the doormat too.  To say this is counterintuitive gives it too much credit.  As a matter of fact, this strategy diminishes the security of the nation's critical infrastructure and cyber assets.

Fourth, it's important to remember the stated purpose of the surveillance efforts: to keep Americans safe from terrorist attacks.  Certainly, that's not to argue that the ends always justify the means, or that any and all actions taken towards that goal must always be permissible.  However, it ought to emphasize that this is an issue between two competing core values - privacy and security.  Both are positive values in the American political system, thus neither side warrants being disregarded or demonized.  Rather, this is a case where two core positive values have come into conflict, and we each have to decide at what point on the spectrum we believe the most prudent strategy rests.

Finally, from a prescriptive point of view, there is a way forward that can continue enhancing our security while still assuaging people's concerns over privacy rights: more transparency.  Based on the public outcry over the N.S.A.'s surveillance efforts, but not Google's, I would argue that the issue is less about the surveillance itself and more about the intense secrecy behind the program.  Secrecy and covert actions taken by clandestine government agencies, with virtually no oversight or check on their power, are the absolute enemy of liberal democracy.  Period.  More transparency about general strategic policy, while still keeping technical implementation measures classified, would go a long way towards striking that critical balance, allaying the public's fears, and keeping the government accountable to the People.  After all, let us never forget: the government is not only meant to serve the People; it is meant to be BY the People.

  

Wednesday, October 23, 2013

How Social Media is Used in Hiring and Firing Employees...

Here is an infographic generously sent to me by Aria Cahill related to previous Nerfherder posts about the demographics of popular social media sites, the online digital divide, and the growing online generational divide...


[Infographic: Fired for Facebook. Source: Online Paralegal Programs]
  

Thursday, June 13, 2013

Apps that Track Stolen Smartphones Are Pretty Worthless...

There are dozens of mobile apps out there designed to let you track your phone if it's ever lost or stolen.  The best of these are Prey (for Android smartphones), LoJack (for laptops), and Find My iPhone (for iPhones, obviously).

They tend to be great for when you accidentally lose your phone in the deep recesses of your couch cushions.  However, when your phone is actually stolen, these apps are pretty worthless - despite how they're marketed.

When my father recently had his iPhone stolen - from his hospital bed, no less - he used the "Find My iPhone" app within a short time from when it must have been taken.  Sure enough, the app worked marvelously.  It displayed the exact location of the phone at that very moment - at a specific intersection in Queens.  He was very excited at the prospect that the app might actually lead to the phone's recovery. 

But when he called the police, they informed him there was little they could do.  You see, despite knowing the exact location of your stolen property, before the authorities can legally act you still have to file an official police report; hopefully that leads to a subpoena being sent to the IP address's internet service provider; and only then can the police search the exact location specified in the subpoena.  This process takes days or even weeks, and as a result, it's unlikely the phone will still be at the same intersection in Queens.

The lesson of the story is that these "tracking apps" do work, but the law is far behind the technology.  So short of tracking down your phone's location personally and confronting the thief face-to-face (which is strongly discouraged), understand these apps' gross limitations.


  

Monday, May 20, 2013

Will Mobile be the Death of Open Source?

Remember when free, open source software was going to topple Microsoft and Apple, and Linux was going to revolutionize the world?  The Open Source Movement - yes, a genuine social movement - has often been heralded for both producing great software and for providing an alternative ideology for how and why things ought to be accomplished.  The only problem has been its inability to go mainstream.  Despite years of constantly churning out high-quality products, it has stubbornly remained solely in the realm of the programmer/hacker community and within the halls of high-tech industry. 

But this is old news.  What's becoming a far more urgent existential threat is mobile computing. 

Even long-time members and supporters of the Open Source Movement have rarely stuck to open source for mobile.  One main reason is hardware.  When you buy an iPhone you're not empowered to customize the device the same ways you'd be able to on a PC.  Mobile devices themselves have far more built-in controls which means you often have to "jailbreak" them in order to do any basic tinkering.  Another reason is service providers.  When you buy a smartphone it would lose most of its value if you didn't subscribe to a cellular carrier's plan to actually, you know, use your phone as a phone.  But the trade-off is, again, more controls being placed upon your phone's level of operability.

Open Source is certainly not dead in mobile computing (yet).  Ubuntu Touch is soon to be released and, stemming from the Linux family tree, will serve as an open source alternative for non-Apple-based smartphones and tablets.   And, of course, Google's Android is (sort of) open source as well. 

But where's the buzz?  Where's the old energy?  Any supporters of the Open Source Movement need to be concerned about the current trajectory things are going in - and do something about it.  Download, experiment, and, above all, PLAY!


Open source operating systems:
  • Ubuntu Touch
  • Firefox OS
  • Sailfish
  • MeeGo

  

Monday, April 29, 2013

Social Networking Sites Become Increasingly Political...

Since its inception, the Internet has often been heralded by some as a potential tool for raising levels of civic and political engagement.  New evidence seems to (finally) support this claim.

From 2008 to 2012 there was a major jump in how many online social network users engaged in some type of political activity, according to a new study by the Pew Internet & American Life Project.  Most notably, the overall percentage of users who posted links to political news stories or articles rose from 3% to 19%.

Here are a few other notable statistics:
  • 38% of SNS users "Liked" or promoted political material that others have posted.
  • 35% encouraged other people to vote.
  • 34% posted their own comments/thoughts on political issues.
  • 33% reposted political content.
  • 31% encouraged others to take action.
  • 28% posted links to political stories for others to read.
  • 21% belonged to a group that is involved with a political issue or promoting a political cause.
  • 20% actually followed elected officials or public figures.
Also, significantly, the total share of SNS users who said they engaged in at least one of these activities was 66%.

What would be interesting to see is further research on the growth patterns for specific social-networking sites.  For instance, how people use Facebook is often very different from how they use Twitter or Instagram, so breaking these overall numbers down in more detail would be instructive.  Correlating these numbers with voter turnout patterns or political party identification might also be revealing about the current state of affairs.

Nevertheless, considering that 66% of social-network users are engaging in some sort of political activity online, is that surprising, or does it merely confirm what you may have already noticed in your increasingly politicized news feeds?