Wednesday, August 01, 2018

A Quick Guide (or Direct Recommendations) for Cutting the Cable Cord...

After considering for a long time whether or not to "cut the cord", I finally took the plunge.  There are some very detailed online guides that thoroughly explain all of the options and considerations involved, but it's easy to get lost in those until your mind is full of indecisiveness and uncertainty.  So here is a super-quick guide - direct and to the point - for anyone who just needs a straight-up recommendation...

  1. Cancel your cable service.  I had been paying $155/month to Verizon for their cable-internet-phone bundle.  I called and asked to cancel all three and made clear that the reason was that I wanted to "cut the cord".  The rep immediately asked if I'd be willing to pay $65/month for just internet.  I declined, saying that was still more than I wanted to pay.  Then she said she could do it for $50/month.  DONE. 

    Cost: From $155/month to $50/month. Savings = $105/month.

  2. Buy a Roku Stick for your TV.  Normally they are about $50, but... (keep reading)...

  3. Subscribe to DirecTV Now.  Yes, there are cheaper options, but DirecTV Now is the closest thing out there to your familiar cable setup (for those, like me, who fear the unknown).  It includes almost every channel I used to have, all of it streaming live as it's broadcast so that it FEELS like regular TV, and it even includes all your local broadcast networks like NBC, CBS, ABC, Fox, etc., so you don't have to deal with buying an HD antenna.  Plus, there's no monthly contract. 

    Cost: $40/month, and they send you a free Roku Stick.

  4. Bump up Netflix.  I already was a Netflix subscriber, but with all the savings of cord-cutting, I upgraded my plan from 2 screens to 4. 

    Cost: From $11/month to $14/month.

  5. Keep Amazon Prime TV.  I already subscribed to this and am not changing anything. 

    Cost: Nothing additional.

  6. Keep HBO Go.  I was already an HBO Go subscriber as well. 

    Cost: Nothing additional.

  7. Download free Roku apps.  There are tons out there, but just for common TV shows and movies that you might actually want to see, I recommend downloading Pluto, Vudu, and Tubi.  I'm already addicted to watching The Onion station.

    Cost:  $0.

Total change to my monthly all-inclusive TV bill:  $171/month to $98/month.  Savings: $73/month, or $876/year.

To reiterate: on this plan, I'm not losing any of my old cable channels (that I actually watched).

And I'll probably even cut costs further once I find my new comfort zone.


Thursday, June 28, 2018

The Ethics of Algorithmic Governance...

I just wanted to quickly share my Prezi slideshow from my presentation on "The Ethics of Algorithmic Governance" at the International Conference of the Digital Government Society (#DGO) earlier this month.

You can also read about the panel in greater detail in the Conference Proceedings here:

Domanski, R., Estevez, E., Styrin, E., Alfano, M., & Harrison, T. Towards an ethics of digital government: A first discussion. In Proceedings of the 19th International Conference of the Digital Government Society. Delft, Netherlands, June 2018. 4 pages. DOI: 10.1145/3209281.3209322

As always, feedback is welcomed.


Thursday, March 22, 2018

What should guide 'Algorithmic Auditing' in theory and in practice?

In response to how "algorithmic bias" has been suddenly thrust upon the national agenda with the publication of books like Noble's Algorithms of Oppression and O'Neil's Weapons of Math Destruction, calls for transparency and accountability are pronounced almost as a knee-jerk reflex.

Transparency, which in this context mainly means requiring private corporations with black-box proprietary algorithms to open their source code and let outsiders view those decision-making formulas, is quite simply never going to happen on a grand enough scale to matter.  Ask Google to let the world see its search algorithm, or Netflix to open up its hood and show us how it recommends new shows and movies, and see how far you get.  It would be like asking Coca-Cola to publish its secret recipe in Reader's Digest.

From a regulatory perspective, requiring a high level of algorithmic transparency is not - and probably never was - a realistic approach.  And there's an additional major problem: just knowing an algorithm isn't necessarily helpful, because the specific data that feeds it is often the key factor in understanding the results it produces.

On the other hand, calls for accountability are more intriguing.  Accountability, in this context referring mainly to those private corporations being penalized in some way for the most egregious examples of algorithmic bias, focuses on outputs.  For instance, regulators may not need to actually see Google's algorithm in order to determine that there's something horribly wrong when a search for the phrase "three black teenagers" returns images of mugshots while, simultaneously, a search for "three white teenagers" returns images of smiling beachgoers.

If some type of penalty system were devised to hold corporations accountable for such outputs, the means of analysis would be "algorithmic auditing".  In fact, New York City is already pursuing this track with its newly created Algorithm Task Force.  But what should such an algorithmic audit actually look like?  What criteria should it use for scientific testing?

There are several categories of algorithmic audits.  In their paper, "Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms", the authors propose the following taxonomy:

  • Code Audits - based on transparency, as described above
  • Noninvasive User Audits - surveying users about their interactions with the platform
  • Scraping Audits - issuing repeated queries to observe the results
  • Sock Puppet Audits - using automated software to inject false data into the platform
  • Crowdsourced/Collaborative Audits - using human testers to inject false data into the platform
Each of these types has its own pros and cons.  Most likely, the characteristics of each specific case will determine which type is most appropriate to implement.
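To make the scraping-audit idea concrete, here's a minimal sketch in Python. Everything in it is invented for illustration: the "platform" is a canned stub (`fake_search`), and a real audit would swap in a function that issues live queries to the service under test.

```python
# Minimal sketch of a scraping audit: issue each query in a matched pair
# repeatedly and tally the labels of the results that come back, so the two
# distributions can be compared for systematic differences.
from collections import Counter

def scraping_audit(query_fn, query_pairs, runs=5):
    """Run every query in every pair `runs` times; return {query: Counter}."""
    tallies = {}
    for pair in query_pairs:
        for query in pair:
            counts = Counter()
            for _ in range(runs):
                counts.update(query_fn(query))  # one "scrape" of the platform
            tallies[query] = counts
    return tallies

# Stand-in for the real platform: canned results echoing the example above.
CANNED = {
    "three black teenagers": ["mugshot", "mugshot", "news"],
    "three white teenagers": ["beach", "stock photo", "beach"],
}

def fake_search(query):
    return CANNED[query]

results = scraping_audit(fake_search,
                         [("three black teenagers", "three white teenagers")])
print(results["three black teenagers"]["mugshot"])  # 10: 2 per scrape x 5 runs
print(results["three white teenagers"]["mugshot"])  # 0
```

The interesting (and hard) part in practice is not the loop but the statistics: deciding when two tallies differ enough, across enough runs, to count as evidence of bias rather than noise.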

An alternative and, in my opinion, more useful and comprehensive framework for ensuring accountability through regulatory policy focuses on these five core principles as proposed by Diakopoulos and Friedler:

  • Responsibility - for any algorithmic system, there needs to be a person with the authority to deal with its adverse effects in a timely fashion.
  • Explainability - any decisions produced by an algorithmic system should be explainable to the people affected by those decisions.
  • Accuracy - sources of errors and uncertainty throughout an algorithm and its data sources need to be identified, logged, and benchmarked.
  • Auditability - algorithms need to include in their design a way that enables third parties to probe and review their behavior.
  • Fairness - all algorithms making decisions about individuals should be tested and evaluated for discriminatory effects.
The surest, and most likely, path forward is for governments to experiment with different policies pursuant to both transparency and accountability as they analyze their own algorithms.  After all, algorithmic decision-making occurs in the public sector too - some might argue on even more consequential issues.  Government regulators should make strides to get their own house in order, learn valuable lessons, and then they'll hopefully be better prepared for the massive wave of resistance that they're certain to encounter when attempting to bring those regulatory proposals to the private sector.


Tuesday, February 27, 2018

Does the U.S. Government's right to examine digital evidence stop at the border?

The Supreme Court is hearing a case this week, U.S. v. Microsoft, which will have broad implications in defining the scope of governmental powers in the age of cloud computing.

As described in the New York Times, in 2013, federal agents served Microsoft with a warrant to get the emails of a suspected drug trafficker. Microsoft turned over the suspect's account information and address book, but did not turn over the content of the email messages themselves. Those messages were stored on servers located in Ireland.

The question before the Court is whether the government can obtain warrants pertaining to assets located outside its own borders.

Craig Newman is right that both sides have a legitimate argument to make.  On the one hand, the government - with a warrant - should be able to investigate things like drug trafficking, terrorism, child pornography, and fraud to the greatest extent the law allows. Where the server storing the 0s and 1s of a particular email happens to be located on any given day should be a secondary concern to the larger interest of public safety.

On the other hand, Microsoft rightfully asserts that, if the U.S. government can potentially have access to all data stored by American-based companies - even on overseas servers - then, at best, American tech companies will be at a serious competitive disadvantage, and at worst, they won't be allowed to conduct business at all with many of the largest economies in the world that have stronger privacy laws in place. Germany, for example, has already publicly stated that if the court rules against Microsoft in this case, it will not use any American company for its data services.

Newman proposes a solution whereby the determining factors ought to be the citizenship and geographical location of the individual whose data the government seeks, rather than the physical location of the data.  However, I believe he misses the point.  By his own admission, the real problem lies not in where the server is physically located, but in how data is dynamically stored.  All of the 0s and 1s that together make up a single email message are increasingly broken up into millions of individual bits that are then stored on different servers in different locations around the world.  It only becomes what we perceive as a single email message when it is reassembled on the requesting end.  In fact, to think of an email message as one single entity stored in one single place is an outdated and increasingly obsolete notion.

It would also be problematic to pursue another suggestion - that the determining factor should be where the data can be accessed.  The argument there goes that, despite the server being in Ireland, the fact that Microsoft could click a button in Redmond, Washington and immediately have access to those emails ought to guide the law.  However, this too would open up a Pandora's Box, considering that the entirety of the Web itself is based on such a system of data, software, and front-end interfaces (i.e., websites) being accessible with a similar click of a button.  Would such a plan, by extension, also grant the U.S. government jurisdiction over virtually all public-facing content on the Web?  It's a slippery slope.

Personally, I believe that the public safety interests here ought to trump the (legitimate) fears of an extension of the surveillance State.  My reasoning is that, because of dynamic cloud-based storage, granting law enforcement the ability to investigate only those digital assets exclusively located in the geographic U.S. would make it virtually impossible to conduct any type of modern investigation. The digital realm is, after all, borderless by definition. The legislative effort should focus instead on creating other checks intent on curbing the actual abuse of such power.

And, no, the blockchain is not the answer either.


Monday, December 18, 2017

How to set up a Raspberry Pi SenseHat...

You know you're in-between semesters when you just want to catch up on writing how-to tutorials.  Here's one for my new favorite toy:  The Raspberry Pi "SenseHat".

The SenseHat is an add-on to the main Pi that includes an LED matrix, a mini-joystick, and six different sensors - all programmable.  They are:

  • Gyroscope
  • Accelerometer
  • Magnetometer
  • Temperature
  • Humidity
  • Barometric Pressure

You can get it from Amazon for $49.99 or from for $34.99 (plus $9.95 for shipping).

It was even used on the International Space Station last year.

Easy steps for connecting the SenseHat to the Pi:

1.  Unplug the Pi and take it out of its case if it's in one.

2.  Push four of the screws up through the bottom of the Pi.  Connect them above-board with the black cylinders.

3.  Connect the black pin base to the Pi's pins.  Make sure they're lined up well before pushing down, but once they are, push down all the way.

4.  Connect the SenseHat to the new pins.

5.  Screw in the remaining screws to anchor the SenseHat to the Pi.  Your Pi now looks like a double-decker device.  When you plug the Pi back in, the SenseHat will light up in a rainbow pattern if it's installed correctly.  And you're done.

Now to actually start using it:

1.  Make sure the Python library you'll need is installed (and install it if not).  At the Pi's terminal window, type:

sudo apt-get install sense-hat

2.  From the Raspbian Start button on the Desktop, click "Programming" --> "Python 3 (IDLE)" --> In the window that opens, click "File" --> "New File" --> and type the following code:
from sense_hat import SenseHat       # the library installed in step 1
sense = SenseHat()                   # create an interface to the board
sense.show_message("Hello World!")   # scroll text across the LED matrix

3.  Save the file, then from the top menu click "Run" --> "Run Module".  Quickly take a look at the Pi/SenseHat and you should see the text scroll across the LED screen like this:

Now you're completely set up and ready to program anything you want.  The API Reference can be found here.  Some starter ideas I plan on exploring using the sensors include programming a compass, a backyard weather station, an actual joystick for PyGame, and a motion-activated alarm system.  I also plan on creating a new GitHub page to share with you some of these scripts that I'm going to write soon.  Play!
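Of those starter ideas, the compass is the easiest to sketch. The heading math below is plain trigonometry and runs anywhere; the commented lines show how it would hook up to the SenseHat's `get_compass_raw()` reading (the library also offers `get_compass()`, which returns a heading directly). The 0-equals-north convention here is my own simplification.

```python
# Compass sketch: turn the magnetometer's horizontal x/y components into a
# heading in degrees (0 = toward magnetic north in this toy convention).
import math

def heading_degrees(mag_x, mag_y):
    """Map (x, y) field components to a 0-360 degree heading."""
    return math.degrees(math.atan2(mag_y, mag_x)) % 360  # wrap negatives

# On the Pi itself (needs the hardware, so shown as comments):
#   from sense_hat import SenseHat
#   sense = SenseHat()
#   raw = sense.get_compass_raw()   # {'x': ..., 'y': ..., 'z': ...}
#   sense.show_message(str(round(heading_degrees(raw["x"], raw["y"]))))

print(heading_degrees(1, 0))   # 0.0
print(heading_degrees(0, 1))   # 90.0
```

A real compass would also need calibration (hard-iron offsets) and tilt compensation using the accelerometer, which is exactly what makes it a fun project.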


How to Install Kodi and Covenant on a RetroPi...

Here is yet another tutorial, because people constantly ask me how to do this.  Kodi is the media center software.  Covenant is the software specifically for streaming videos.

These instructions assume that you already have a RetroPi operating system running on a Raspberry Pi.  And before you begin, make sure your Pi is connected to both your keyboard and wi-fi.

Steps for installing Kodi:
(based on this tutorial from

1.  Once your RetroPi has launched, navigate to the "RetroPi" option in the carousel menu (the one that's an image of a joystick).

2.  Select "RetroPi Setup".

3.  The screen will go blue.  That's ok.  Select "Manage Packages", then "Manage Optional Packages".

4.  Scroll down and select #308: Kodi.

5.  Select "Install from Binary".

6.  The screen will go black and text will begin scrolling.  Let it run for about 5 minutes.  Kodi is now being downloaded and installed.

7.  When finished, escape back to the RetroPi main menu, and restart Emulation Station.  On the carousel menu, instead of seeing the old "RetroPi" graphic with the joystick, you should see a new option labeled "Ports".  That's it.  Kodi is ready to roll.

Steps for installing Covenant:
(based on this tutorial from

1.  Go into "Ports" and open Kodi.  Click the "Settings" icon in the top-left, then select "System Settings".

2.  In the bottom-left, click the "Settings" icon until you get to "Expert" mode.  Then, on the left sidebar, select "Add-ons"  -->  Turn on "Unknown Sources"  -->  then click "Yes" on the popup window.

3.  Escape to the main Kodi menu.  Click the "Settings" icon in the top-left again, then select "File Manager" and "Add Source".

4.  Click where it says "None".  Enter the following URL:

5.  Where it says "Enter a name for this media source", type "xvbmc".  Click "OK".

6.  Go back to the Kodi main menu again.  On the left sidebar select "Add-ons".  Then click the "Package installer" icon in the top-left (looks like a Dropbox icon).

7.  Select "Install from zip file"  -->  scroll down and select "xvbmc"  -->  select "".

8.  Wait for a popup to appear in the top-right that says "xvbmc Repository Installed".

9.   Click "Install from repository"  -->  Click "XvBMC (Add-ons) REPOsitory"  -->  "Video Add-ons"  -->  "Covenant"  -->  "Install".

Give it a few minutes, and you should be all finished.  To actually use Covenant to find and stream videos, you'll find it in the Kodi main menu under "Video Add-ons".



Wednesday, November 15, 2017

Equality Under the Law, via Algorithm?

This summer a bill was introduced in the U.S. Senate that had a very noble goal - to reduce bias and discrimination in the criminal justice system by reforming how bail is determined.

Sponsored by Senators Kamala Harris (D-CA) and Rand Paul (R-KY), the Pretrial Integrity and Safety Act attempts to address the problem as described here in the New York Times...

  • 450,000 Americans sit in jail today awaiting trial because they cannot afford to pay bail. 
  • Nine out of 10 defendants who are detained cannot afford to post bail, which can exceed $20,000 even for minor crimes like stealing $105 in clothing.
  • Whether someone stays in jail or not is far too often determined by wealth or social connections, even though just a few days behind bars can cost people their job, home, custody of their children — or their life.
  • Excessive bail disproportionately harms people from low-income communities and communities of color.
  • Black and Latino men respectively are asked to pay 35 percent and 19 percent higher bail than white men.
  • Bail is supposed to ensure that the accused appear at trial and don’t commit other offenses in the meantime. But research has shown that low-risk defendants who are detained more than 24 hours and then released are actually less likely to show up in court than those who are detained less than a day.
  • Our bail system does not keep us safer. In a study of two large jurisdictions, nearly half of the defendants considered “high risk” were released simply because they could afford to post bail.

Clearly, the current bail system has major issues.  However, the Harris-Paul bill seeks to remedy those issues by replacing the money bail system with risk-assessment scores - and that raises a host of new issues in its stead.

A system that determines whether a defendant should be released before trial based on individualized risk-assessment scores works essentially the way banks decide whom to give loans to based on credit scores.  An algorithm factors in inputs like criminal history, substance abuse, etc., to "predict" the likelihood of flight and threat to the community.

Ideally, such a system would replace judicial bias in determining bail amounts by using agnostic data instead.

But data is not agnostic!  It draws on decades of historical records in which bias was in effect.  The danger is that such a system, rather than reducing or even eliminating bias in the criminal justice system, would actually further entrench it.  Algorithms are only as good as the data they're built upon.

And this doesn't even get to the question of whether or not the algorithms will be transparent to the public.  Nor does it raise any flags about the likelihood that such algorithms are usually, at least in the private sector, also influenced by aggregate population inputs that have nothing to do with the individual.  For instance, would it be acceptable for someone's "predicted" risk-assessment score to be higher based on what neighborhood they lived in, whether their parents were single, married, or divorced, or what educational institutions they attended?
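To see why those aggregate inputs matter, consider a toy score - invented weights and features resembling no actual risk-assessment tool - in which two defendants identical on every individual factor get different scores purely because of where they live.

```python
# Toy risk score (all weights and feature names are invented for illustration).
# One weighted feature is derived from the defendant's neighborhood, so two
# otherwise-identical people come out with different "risk" numbers.

def risk_score(features, weights):
    """Weighted sum of feature values; higher = 'riskier' under this toy model."""
    return sum(weights[name] * value for name, value in features.items())

WEIGHTS = {"prior_arrests": 2.0, "failed_to_appear": 3.0, "zip_code_risk": 1.5}

person_a = {"prior_arrests": 1, "failed_to_appear": 0, "zip_code_risk": 0.2}
person_b = {"prior_arrests": 1, "failed_to_appear": 0, "zip_code_risk": 0.9}
# person_b differs ONLY in the neighborhood-derived input.

print(risk_score(person_a, WEIGHTS))  # ~2.3
print(risk_score(person_b, WEIGHTS))  # ~3.35
```

If the `zip_code_risk` input was itself computed from decades of biased enforcement data, the gap between the two scores is the entrenchment problem in miniature.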

Don't get me wrong.  Both Sens. Harris and Paul should absolutely be commended for trying to fix an unfair system, and for doing so in a nonpartisan manner.  But the real question is whether an algorithmically-driven system would be better than the current one.

If your livelihood was on the line, would you feel more comfortable with your fate decided by a judge or an algorithm?

It's an uncomfortable question, but it's not hypothetical or futuristic.  Again, this is a bill that's already been sent to committee in the U.S. Senate.

Someone I met from the Brennan Center for Justice, Vienna Thompkins, emailed me these insights as well:

"The press that I've seen on the bill so far barely mentions the issues with bias in risk assessment tools, or the proprietary nature of algorithms when third parties are involved in the process. I've had trouble in the past digging up data that confirms the current prevalence of "human" or interview-based risk assessments versus algorithmic assessments across states, so it's difficult to gauge what the impact of this incentivization might be. I imagine, though, that private organizations that provide risk assessment services would jump at the opportunity to promote the bill and expand their customer base."

It's wise of the bill's co-sponsors to roll this out first at the state level - a federalism-based "policy laboratory" approach.  My takeaway is that the algorithms producing the individualized risk-assessment scores are going to be imperfect, to say the least, and this proposed system is likely to create as many problems as it solves.  The bottom line is that caution should be exercised, and if the bill becomes law, we should scrutinize the results with a healthy dose of skepticism.  On the other hand, if viewed simply as a first draft, it's a step in the right direction.


Wednesday, September 27, 2017

VpnMentor's Feed on Internet Censorship...

Here's another post I'm writing because I keep getting asked about it. It's a quick plug for a site I have found to be pretty valuable.

vpnMentor maintains a regularly updated feed on events related to Internet censorship around the world.

What I particularly like is that there is both a feed for news events related to Internet censorship as well as another feed that provides advice and how-to tutorials for taking steps to protect your privacy and use VPNs to evade, say, Chinese filters.

If you're interested in the subject or want to learn how to better protect yourself online, check it out.


Thursday, September 21, 2017

How to Set Up a RetroPi Gaming Console...

Because I keep being asked to help folks turn a Raspberry Pi into a RetroPi gaming console, and haven't found any online tutorials I'm in love with, here is my own quick, no-nonsense, how-to guide...

  1. Buy a Raspberry Pi Kit, MicroSD Card, and Gaming Controller from Amazon.

  2. Download the free RetroPi software

  3. You will now need to move that RetroPi software onto the MicroSD card.  To do this on a PC, download the free Win32 Disk Imager software.  Now, insert the MicroSD card into your PC.  Install Win32DiskImager and run it.  Browse to find your RetroPi file (it will be named something like "retropie-4.2-rpi2_rpi3"), then choose your MicroSD card's drive letter as the "Device", then click "Write".

  4. When it's finished writing the disk image to the MicroSD card, remove the card from the PC, and insert it into your Raspberry Pi (there's a small slot on the side for it).  Now connect your Raspberry Pi to your monitor, keyboard, and game controller, and plug it in to power it on.  You will see a black Terminal screen loading the software.  Don't do anything.  Just give it a few minutes and let it run.  When it's finished, you should see this screen...

  5.  You will now be asked to configure your game controller.  Press each button on the game controller as requested.  For example, when it lists the "left button", hit the left arrow on the controller; when it lists the "A" button, hit "A" on the controller, etc.  Sometimes you will be asked to press the corresponding key when there isn't one, such as on the SNES-style pad. Just hold any button down for a few seconds in that case and it will skip that input.

  6. You will now find yourself in Emulation Station, and your RetroPi setup is done.  The screen will look similar to this...

  7. Even though you're finished setting up the RetroPi software, you don't have any games yet!  For acquiring games (a.k.a. ROMs), I'm going to re-post this description from, as it is both helpful and amusing...

  8. The emulators don't come with games pre-installed. You'll have to therefore find the games yourself.

    This is where it gets a bit dodgy when it comes to copyright.

    If you don't already own a game, downloading and installing a ROM on Retropie is 99.9 per cent of the time illegal. That's why we're not going to actively tell you to go and download classic SNES, NES, Mega Drive or other console games from the past. We will though point you to some online resources that might have them available for download and then you can decide whether you want to or not.

    One excellent site for ROM files is Emuparadise. It has a vast number of ROMs and ISO files for many of the consoles and computers supported by Retropie, including Super Nintendo, NES, N64 and many more, even PSOne games.

    Once you've downloaded ROMs onto your PC you need to transfer them onto the Raspberry Pi itself and you'll need a USB memory stick for that. It's actually a doddle to do and here's how:

    1. Insert a USB stick (formatted to FAT32) into a spare port on your PC or Mac.
    2. Create a folder on the stick called "retropie" (without the quotation marks).
    3. Remove the stick from your computer.
    4. Insert the stick into one of the spare ports on your Raspberry Pi and wait for a while. This is because Retropie is creating the correct folder system on the stick that it needs to recognise ROMs.
    5. Remove it from the Raspberry Pi.
    6. Insert it back into your computer's USB port and you'll see that there are folders for all the major different console and computer types inside "retropie/roms/".
    7. Just add the relevant ROMs into the respective console or computer folder.
    8. Unplug the stick from your computer and plug it back into your Raspberry Pi.
    9. You'll need to wait for the Pi to recognise all of the ROMs and it can take quite a while depending on how many you have.
    10. Refresh EmulationStation by hitting "F4" on your keyboard or through the start menu.
    11. The games should be available under the logo for each console or computer.
    12. Bingo.

    We've actually found that this process can take a while to complete for the ROMs to be ready and playable. You might also find some ROMs just won't work. Not all the emulators are perfect and the older the games machine, the more likely they will work properly.
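The USB-stick steps above boil down to a simple folder convention, which can be sketched in a few shell commands. The console folder names below are examples; RetroPie generates the full set automatically in step 4, so you normally never create these by hand.

```shell
# Sketch of the folder layout RetroPie expects on the USB stick.
mkdir -p retropie/roms/nes retropie/roms/snes retropie/roms/megadrive

# Drop each ROM into the folder matching its console, e.g.:
# cp SuperGame.sfc retropie/roms/snes/

ls retropie/roms
```

Once the stick goes back into the Pi, RetroPie copies anything it finds under `retropie/roms/<console>/` into its own ROM directories.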

You're all finished, and you won't need the USB stick anymore.  In the future, anytime you plug in your Raspberry Pi to power it on, it will automatically launch Emulation Station, displaying whatever gaming systems you have games for, and when you select one, it will list all of your games, ready to play.

Enjoy  :-)


Wednesday, August 09, 2017

Dissenting Speech and Sexist Culture in the Tech Industry...

Last Friday, a Google software engineer named James Damore sent out a memo to other employees within the company where he argued that the gender gap exists in the tech industry, not because women face bias and discrimination in the workplace, but because of inherent psychological differences between men and women.

His 10-page missive titled "Google's Ideological Echo Chamber" makes a lot of contentious statements - that women are more neurotic and less stress-tolerant than men; that women are less likely to pursue status than men; that women are less interested in the “systematizing” work of programming; and more.

He was fired three days later.

At issue, though, is not only the extent to which sexism pervades the culture of Google and Silicon Valley more generally, but also the company's decision to fire Mr. Damore.  Supporters of women in tech applauded the move (and several female employees at Google had threatened to quit if no action were taken), but others have criticized the firing as silencing an employee for speaking his mind.  As Nick Wingfield of the New York Times described it, to critics this has become "a potent symbol of the tech industry's intolerance of ideological diversity".

First of all, I strongly recommend that you actually read the memo for yourself.  Have an opinion that's actually an informed opinion.  That said, my own view on reading it is that Damore makes an unbelievable number of wrong-headed assumptions based on no facts whatsoever.  Take these whoppers as only a few examples...

  • Women, on average, have more openness directed towards feelings and aesthetics rather than ideas. Women generally also have a stronger interest in people rather than things, relative to men.
  • These two differences in part explain why women relatively prefer jobs in social or artistic areas.
  • Women, on average, have more neuroticism (higher anxiety, lower stress tolerance). This may contribute to the lower number of women in high stress jobs.
  • Men are more motivated by status. Status is the primary metric that men are judged on, pushing many men into higher paying, less satisfying jobs for the status that they entail.

It's rather mind-blowing that such pearls of wisdom were conceived by a college-educated man in 2017, and not for an essay on Victorian England.  And arguing that this is all the result of biology makes it that much more absurd.  The implication is that women are not even capable of succeeding in tech.  How anyone can convincingly defend the content of Damore's message is beyond me.

Moving on, the issue of how Google handled the situation is more complicated.  Ideological diversity should indeed be valued, but it also has its limits.  In response to those who cite the First Amendment, let's remember that Damore's speech was not made in a public space but by an employee of a private business, so the First Amendment doesn't legally apply.  Perhaps it would be constructive to think of ways Google could have handled the situation differently.  Is there any middle ground between firing Damore and keeping him on the payroll indefinitely?  Of course there is: not only required diversity training but punitive or probationary measures as well.  They could have gotten creative, but they obviously didn't see the need.  To what extent that in itself is a problem, we can all have our own reasonable opinions.

However, I am reminded of the story of Al Campanis, who as general manager of the Los Angeles Dodgers in 1987, was interviewed on television by Ted Koppel on the 40th anniversary of Jackie Robinson breaking baseball's color barrier.  The interview went like this...

KOPPEL:  Why is it that there are no black managers, no black general managers, no black owners?... Is there still that much prejudice in baseball today?

CAMPANIS: No, I don't believe it's prejudice. I truly believe that they may not have some of the necessities to be, let's say, a field manager, or perhaps a general manager.

KOPPEL: Do you really believe that?

CAMPANIS: Well, I don't say that all of them, but they certainly are short.  How many quarterbacks do you have? How many pitchers do you have that are black?

KOPPEL: Yeah, but I mean, I gotta tell you, that sounds like the same kind of garbage we were hearing 40 years ago about players...

CAMPANIS: No, it's not -- it's not garbage, Mr. Koppel, because I played on a college team, and the center fielder was black, and the backfield at NYU, with a fullback who was black, never knew the difference, whether he was black or white, we were teammates. So, it just might just be -- why are black men, or black people, not good swimmers? Because they don't have the buoyancy.

KOPPEL: Oh, I don't -- I don't -- it may just be that they don't have access to all the country clubs and the pools. But I'll tell you what, let's take a break, and we'll continue our discussion in a moment.

The Dodgers fired Campanis the next day.  But in perhaps an instructive twist, after the Dodgers responded by creating an assistant for minority affairs, that new assistant, Harry Edwards, hired Campanis back.  When asked why, he responded that we need the Al Campanises of the world in order to know how those with prejudices think.


Friday, July 28, 2017

The Bitcoin Fork as the Currency's Great Existential Moment...

On August 1st, the world's favorite cryptocurrency, Bitcoin, is likely to split into two.  This is a seminal moment that we get to see play out right before our eyes, and no one is quite sure what to expect.

Here's an explanation... right now, despite 1 Bitcoin being worth about $2,700, Bitcoins are hardly ever used in actual transactions.  The Bitcoin system's infrastructure can only handle 6 transactions per second (as opposed to Visa, which can handle 1,600 transactions per second).  It's slow to the point of being virtually unusable in real-life retail situations.
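To make those figures concrete, here's a quick back-of-the-envelope calculation (a sketch using only the 6 and 1,600 transactions-per-second numbers above): clearing an illustrative backlog of ten million transactions would take Bitcoin nearly three weeks of nonstop processing, versus under two hours for Visa.

```python
# Rough throughput comparison using the figures cited above.
BITCOIN_TPS = 6      # Bitcoin's transactions per second
VISA_TPS = 1600      # Visa's transactions per second

def hours_to_clear(transactions, tps):
    """Hours needed to process a given number of transactions."""
    return transactions / tps / 3600

backlog = 10_000_000  # an illustrative ten-million-transaction backlog

print(f"Bitcoin: {hours_to_clear(backlog, BITCOIN_TPS):,.0f} hours")  # 463 hours
print(f"Visa:    {hours_to_clear(backlog, VISA_TPS):,.1f} hours")     # 1.7 hours
```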

To fix this problem, there are two prevailing ideas.  The first idea, supported by both the organization and the community that manage Bitcoin's underlying software, is called the "soft fork" (also known more formally as SegWit, BIP 148, and UASF).  Without getting overly technical, the soft fork would take the Signature or "Witness" data in each transaction and move it to the end of a transaction block, in a separate structure from the data of the transaction itself.

The pros of the soft fork are that it would make transactions much faster (thanks to the blocks being structured more efficiently), transaction fees would go down, and a Lightning Network would be created that allows micropayments (like you paying for coffee at Starbucks) to be processed instantaneously without any fee.  The largest con of the soft fork is that it is widely seen as only a temporary fix, and that a hard fork will still be necessary in the future. So basically they're just kicking the can down the road to a time when it'll be even more problematic.

Speaking of which, the second idea, supported primarily by Bitcoin miners, is called the "hard fork" (also known as SegWit 2x, MASF, BIP 91, and UAHF).  This hard fork would also re-structure the "Witness" data, but would go a big step further and additionally increase the actual block size from 1MB to 2MB.

The pros of the hard fork are that it would also make transactions much faster, and would do so on a more permanent, or at least long-term, basis.  But the pros and cons of the hard fork start to blur because it will not be backwards-compatible.  Both the hardware and software required for mining need to be replaced - and since not every miner will have the resources to do so, many will get out of the market, leaving less competition and thus higher mining rewards for those who remain.  That's a big con for some but a big pro for others.

And to clarify the terminology, after the split next week, the soft fork will be called Bitcoin Core and will use the ticker BTC, while the hard fork will be called Bitcoin Cash and use the ticker BCC.

What's an investor to do?  The first priority is to safeguard your money.  Many of the biggest exchanges have gone on record saying that they will not support Bitcoin Cash.  That should certainly be a significant consideration.  Also, in the immediate term, some advisors like Mohit Mamoria are recommending that investors "not make any transactions a few days before and after August 1, 2017.  Because of the fork, you might lose your Bitcoins into the thin air.  After all, Bitcoins are nothing but records of transactions.  If your transaction doesn’t get recorded by either of the chains, they will be gone forever.  Poof."

One Bitcoin trader I spoke with, Keith E., a member of the Rational Investor Community, plans to keep 15% of his capital in Bitcoin, on an exchange that supports the fork.  He recently withdrew 50% of his capital in Bitcoin at $2,814 and placed the rest in Altcoins.  His reasoning...

"I have concerns on how this will play out.  As a trader, when in doubt, get out.  If you are a crypto trader, the only reason to keep all your assets in Bitcoin [right now] is greed.  Bulls and bears make money; pigs get slaughtered.

"Also, the only reason charts fail on technical trading is when there is a huge fundamental driver.  I don't think it gets more fundamental than a hard fork in Bitcoin."

Perhaps the most interesting aspect of this fork is the decision-making process that led to this existential moment.  The Bitcoin system was specifically designed with an open architecture to make forks like this not only possible but encouraged.  Let there be as many variants or offshoots as people want to create - the ideology goes - and what survives will be whichever variants people actually choose to adopt.  This is the rough consensus model of governance in action, which has served the larger tech community so well in the past.  However, the stakes are different when rough consensus is applied to technical standards-setting than when it is applied to the financial sector.  Waiting on the sidelines to see how things ultimately shake out lends itself to a different sense of urgency and a different time horizon.


Friday, July 21, 2017

Threading the Needle with E-Textiles...

Here at the WiTNY Summer Guild, I just had a chance to sit in on an e-textile workshop.  E-textiles are fabrics that have embedded electronic circuitry, and are being manufactured more and more to include sensors that can regulate temperature, change the color of the garment, display lighting, and more.

The E-Textile Lounge is a great place to start exploring some of the possibilities, and to immerse yourself in a starter project.  Personally, I'm a fan of this easy-to-DIY Virtual Reality Glove...

In a broader context, e-textiles are also being used to help teach basic Computer Science principles and the logic of circuitry, and some scholarship has begun on the best ways to incorporate e-textiles into curricula at both the college and K-12 levels.

So what I produced at this workshop was the following children's decorative bracelet.  When you snap the two ends together, an LED light is triggered, and the thread itself is the conductor.  Try not to be too overwhelmed with its amazingness.

Seriously though, I found that the actual skill of sewing was more of a challenge than the electronic component itself.  And therein may lie the lesson... that the high-tech push into a traditional space like fabric and garments may actually exacerbate the demand for old-world skills like sewing and even, despite my best efforts, threading a needle.

A video tutorial by my pal, Diane Levitt...


Thursday, April 06, 2017

The Rise of Bot-based Collective Blocklists on Twitter...

Yes, it's a mouthful, but there is an emerging trend known as Bot-based Collective Blocklists that might already be influencing your social media feeds.

Here's the idea.  In networked publics such as Twitter, harassment campaigns are often launched by groups to intimidate, insult, and threaten specific targeted users.  From the perspective of democratic discourse, such harassment campaigns can cause a severe chilling effect on speech in these spaces, and typically it is up to the targeted individual to block their harassers one account at a time - which can be an overwhelming task, if not completely impractical.

As described by R. Stuart Geiger, one solution that has emerged to counter these harassment campaigns is known as collective blocklists.  Basically, instead of each individual having to block their many harassers, a community of such individuals can add their harassers' information to a blocklist that is then shared with, and can be subscribed to by, the entire community.  It's a crowdsourced mechanism for curating the blocklist, and, using the Twitter API, free services will incorporate that blocklist into the filters of each community member's individual Twitter account.
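The mechanics can be sketched in a few lines.  In this hypothetical Python model (all class and account names here are invented for illustration; the real services do this through the Twitter API), each member reports harassers to a shared list, and a subscriber's effective filter is their personal blocks plus every community list they subscribe to:

```python
# Conceptual sketch of a collective blocklist (all names hypothetical).

class Blocklist:
    """A community-curated list of blocked account IDs."""
    def __init__(self):
        self.blocked = set()

    def report(self, account_id):
        # Any member can add a harasser to the shared list.
        self.blocked.add(account_id)

class Subscriber:
    def __init__(self, personal_blocks=None):
        self.personal = set(personal_blocks or [])
        self.subscriptions = []

    def subscribe(self, blocklist):
        self.subscriptions.append(blocklist)

    def is_blocked(self, account_id):
        # Effective filter = personal blocks plus every subscribed list.
        if account_id in self.personal:
            return True
        return any(account_id in bl.blocked for bl in self.subscriptions)

community = Blocklist()
community.report("troll_123")

alice = Subscriber(personal_blocks=["spammer_9"])
alice.subscribe(community)
print(alice.is_blocked("troll_123"))  # True, via the shared list
```

The point of the design is the asymmetry: one member's report instantly protects every subscriber, which is exactly what makes it both powerful and, as discussed below, risky.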

On the surface, this may seem like a great idea; however, it's a bit more complicated.  Some blockbots have their lists of harassers curated by a single person, others use community-curated blocklists, and still others use algorithmically-generated blocklists.  And here's where it's problematic.  The algorithms used to block people are predictive and situated within a particular socio-technical context.

When filtering and gatekeeping are automated by software agents running predictive models in such a bottom-up manner, numerous potential pitfalls arise.  Any exclusive community becomes capable of blocking any type of user for any reason - and this is structurally embedded in the source code itself.  As a consequence, voices may be silenced in networked publics solely because a community's algorithm predicted that they would not be well-received.  This is what Geiger refers to as "automated discrimination".
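To see how such automated discrimination can arise, consider a hypothetical heuristic loosely modeled on real blockbots (the seed accounts, names, and threshold below are all invented): block any account that follows more than one "seed" account.  Notice that the prediction never examines anything the blocked person actually said.

```python
# Sketch of an algorithmic blocklist heuristic (illustrative only):
# block any account that follows two or more known "seed" accounts.

SEED_ACCOUNTS = {"seed_a", "seed_b", "seed_c"}

def algorithmic_blocklist(follow_graph, threshold=2):
    """Return accounts following >= threshold seed accounts.

    follow_graph maps an account to the set of accounts it follows.
    Membership in the blocklist depends entirely on who you follow,
    not on anything you have said -- hence "automated discrimination".
    """
    return {
        account
        for account, follows in follow_graph.items()
        if len(follows & SEED_ACCOUNTS) >= threshold
    }

graph = {
    "harasser": {"seed_a", "seed_b"},           # two seeds -> blocked
    "curious_onlooker": {"seed_a", "news_site"},  # one seed -> not blocked
}
print(algorithmic_blocklist(graph))  # {'harasser'}
```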

Yes, these bot-based collective blocklists can provide an immensely valuable public service by blocking harassment campaigns, but they could just as easily be used (and misused) to silence contrarian viewpoints and, in turn, further transform our social media feeds into mere echo chambers that reinforce social and political biases.

It's quite the double-edged sword, which is fine, but needs to be recognized as such.


Wednesday, March 22, 2017

Designing Democratic Spaces Online Without Allowing Trolls To Take Over...

Jennifer Forestal wrote an interesting article in the most recent issue of the American Political Science Review titled, "The Architecture of Political Spaces: Trolls, Digital Media, and Deweyan Democracy".  In it, she highlights the importance of how website architectures can either invite or prevent different types of political activities by creating different types of spaces.

So if the challenge is to create online spaces that encourage democratic discourse, then the problem is trolling. As an example, Forestal describes how, in August 2014, Jezebel - a site dedicated to "what contemporary women want to talk about" - had its comment section overwhelmed with images of violent pornography posted by anonymous users.

In Forestal's assessment, this was made possible by the open architectural design (through code) of the site's commenting platform, Kinja, which allowed anyone to create an account and comment anonymously as often as they wished.  Also, by granting each commenter the power to be moderator of any ensuing discussions, each discussion thread was effectively treated as a unique and exclusive event run by the thread's founder, rather than conceiving of all of the threads on the site together as part of a single collective, or community, enterprise.  Thus, trolls were able to proliferate.

Ultimately, Kinja was changed so that comments were divided into "pending" vs. "approved" categories - expanding its boundaries of approved commenters to include users of the website with "a long history of demonstrated good work".

Here's the thing... this redesign of Kinja's architecture kept the conversation "open", however it added a layer of editorial control.  Some might say it created an elite class of gatekeepers - done intentionally and by design.  Think of it as having created a Superdelegate system - a mechanism for the party establishment to check the impulses of the rank-and-file.
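As a toy illustration of that redesign (not Kinja's actual code; the names and the numeric threshold are invented), the "long history of demonstrated good work" rule can be modeled as a simple threshold that routes each comment into either a pending or an approved queue:

```python
# Toy model of a "pending" vs. "approved" comment gate (hypothetical names).

APPROVAL_THRESHOLD = 10  # an assumed count of previously approved comments

def route_comment(author_history, comment, pending, approved):
    """Place a comment based on the author's track record."""
    if author_history.get(comment["author"], 0) >= APPROVAL_THRESHOLD:
        approved.append(comment)   # trusted commenters publish immediately
    else:
        pending.append(comment)    # newcomers wait for moderation

history = {"veteran": 42, "newcomer": 1}
pending, approved = [], []
route_comment(history, {"author": "veteran", "text": "hi"}, pending, approved)
route_comment(history, {"author": "newcomer", "text": "hi"}, pending, approved)
print(len(approved), len(pending))  # 1 1
```

The gatekeeping lives entirely in that one threshold comparison, which is the sense in which the architecture itself creates an elite class of commenters.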

Without placing any value judgments, the big idea here is something that I and others have written about extensively - that software architectures absolutely structure people's behavior in cyberspace.

Forestal recommends that, in order to avoid trolls, online democratic communities need to be designed as "small" spaces, with stable members, and yet still be "flexible" enough to be responsive to change so as to prevent gated communities that discourage new experiences and encounters.  But this is not the most revealing conclusion; in fact, it's been the holy grail that designers have been pursuing for years but no one has quite figured out how to pull off.  The main contribution of Forestal's piece, then, is to frame this familiar challenge in a better theoretical context.


Thursday, March 16, 2017

How to turn your Raspberry Pi into a Web Server...

Many "first" Raspberry Pi projects are actually overly ambitious and complicated for beginners.  Well, here's one that is quick, easy, and free.  Assuming you've already set up your Raspberry Pi, just follow these steps to turn your Pi into a fully functional Web Server and host your own website.

(You will be entering these commands in the Terminal window.)

1. Install updates.

sudo apt-get update && sudo apt-get upgrade

2.  Install the Apache Web Server and PHP

sudo apt-get install apache2 php5 libapache2-mod-php5

3.  Then restart the server with:

sudo service apache2 restart

That's pretty much it. You can view the default web page by entering your Pi's local IP address in the browser (something like a 192.168.x.x address).

Once you set up port forwarding on your router, you can then view your web page with the IP address that the rest of the world will use to reach your site. To determine this, just Google "WhatsMyIP".

In the future, you'll use this as the Pi's directory for your website:  /var/www/html
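Incidentally, if you ever just want to test-serve a directory without Apache, Python's standard library (already installed on the Pi) includes a minimal web server.  This sketch wraps it in a helper function ("serve" is a name chosen for this example):

```python
# Minimal alternative web server using only Python's standard library.
from http.server import HTTPServer, SimpleHTTPRequestHandler

def serve(port=8000):
    """Serve the current working directory until interrupted."""
    HTTPServer(("", port), SimpleHTTPRequestHandler).serve_forever()
```

Call serve() from inside the directory you want to publish and browse to http://<your-pi-ip>:8000.  It's handy for quick checks, though Apache remains the right tool for an always-on site.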

One final, but very important, thing....  In order to make any changes to your "html" directory, you'll need to change its permissions.  Open the Terminal, navigate to the "www" directory, and enter this command to change the owner from "root" to "pi" (or your username):

sudo chown -R pi: html

You can check out my Raspberry Pi-powered website at:

If publishing websites were hard, the Internet wouldn't be what it is.


Thursday, March 02, 2017

Review - Binary Bullets: The Ethics of Cyberwarfare

The following is my book review of Binary Bullets: The Ethics of Cyberwarfare. This review is scheduled to be published in the forthcoming issue of Perspectives on Politics.

In a volume of rich ambition, tackling an important area of rapidly growing geopolitical significance, Allhoff, Henschke, Strawser, and their contributors have overwhelmingly succeeded in presenting a current and comprehensive analysis of the ethics of cyberwarfare, and have introduced numerous theoretical and conceptual models that will guide the field for years to come.

Allhoff, Henschke, and Strawser begin by raising the definitional problem that often plagues studies of cyberwarfare – to what extent is cyberwarfare even warfare at all?  The editors immediately address this by defining cyberwar as “an act of violence conducted by, or targeted at, information communication technologies intended to compel our opponent to fulfill our will” (p. 3). Throughout the volume, to its credit, the problem of definition is acknowledged repeatedly, although the editors do an excellent job of making sure that we do not get lost in it.

The first section takes a norms-based approach, constructing its ethical basis more on what’s already occurring in cases of cyberwarfare and in the realm of international law than on how ethics normatively ought to develop going forward. George R. Lucas Jr. argues convincingly that a soft-law approach (“best practices”) is more likely to be accepted than formal legislation, because norms emerge from practice rather than through stipulative laws and regulations imposed externally. Especially at the international level, where cyberwarfare is carried out by nation-states, the limitations of international law seem particularly pronounced in the borderless cyber realm, and thus there is not much practical alternative.  In contrast to this soft-law approach, however, Michael N. Schmitt and Liis Vihul examine international legal norms, arguing that new treaties or customary law norms to govern cyberconflicts will find a range of opposition, and thus that evolving existing international law is “the more likely near-term prospect” (p. 52).  Randall R. Dipert then segues nicely into the next section by arguing that just-war theory should apply if the effects of cyberwarfare rise to the level of harm produced in traditional warfare. Perhaps Dipert’s most interesting idea is his reexamination of what constitutes “arms”. If weaponized malware like Stuxnet is based on algorithms, and algorithms are really just ideas, then can we really consider ideas to be, literally, weapons - perhaps even to be bought and traded as military commodities - since they can demonstrably be used for harm?

Interestingly, while the Tallinn Manual is harshly critiqued by multiple authors here and called “a spectacular failure”, those who are bold enough to explicitly propose ethical guidelines for cyberwarfare tend not to veer far from Tallinn’s principles – namely, that cyberattacks shouldn’t be directed against civilians, should only be directed at military targets in a way that minimizes collateral damage, the principle of equivalent harm, etc.

The second section takes a just-war approach. Can just war theory accommodate developments in cyberwarfare, or do these new ways of fighting render its application obsolete? David Whetham draws an analogy to soldiers spreading out over an enemy’s territory to plunder and destroy everything in their path, though he is careful to state that cyberwar is not real war at all. Ryan Jenkins follows by arguing that cyberwar can be Ideal War, in which states direct their force discriminately and proportionally against military targets, minimizing non-combatant and collateral damage. Brian Orend then presents a thoughtful and important discussion of the ethical considerations informing “what justice and law should require of good-faith actors in the aftermath of cyberwar” (p. 116). He raises critical issues that often go overlooked and, to his credit, offers a constructive way forward.

The third section explores the ethos of cyberwarfare.  Perhaps the most intriguing chapter in the volume is Matthew Beard’s The Code of the Cyberwarrior. He draws a comparison to less formal military codes of honor – “the warrior code” – akin to “what it is to be a Marine”. Cyberwarriors require a self-policing and similar code to help cultivate morally good conduct and develop a normative identity. Beard’s arguments stand out for focusing more on individual ethics of the cyberwarrior, rather than those of the nation-state, and correspondingly for ethics being enforced through peer-based social pressures rather than international law or other legal norms. The reader is left wanting to know more about the cultural anthropology of the cyberwarrior, and about that of hackers and hacktivists for that matter, but Beard maintains the distinction between these different types of cyberoperatives.

This focus on the individuals involved in cyberwarfare is coupled nicely with the chapters by David Danks and Joseph H. Danks who call for “bringing humans back in the loop” (p. 178) in formulating our ethical understanding in such a highly technical field, as well as that by Daphna Canetti, Michael L. Gross, and Israel Waismel-Manor whose analysis suggests the very real psychological harm that cyberwarfare can cause among its participants. These contributions serve as valuable reminders that, in all matters of cyberspace, it is still real people who are involved – carrying out the operations and feeling their effects.

Finally, the volume closes with a look at cyberespionage, the role of deception, and privacy. Heather M. Roff is the first to question whether cyberoperations geared towards deception of the enemy fall on the side of “permissible ruses” or are impermissible acts of perfidy, eroding the levels of trust between enemies that would undermine peace negotiations and contribute to greater international instability. Seumas Miller then presents a new term - “covert political cyberaction” – that addresses acts that are not quite cyberwar and not quite cybercrime, but rather are the “dirty” actions that tend to be harmful and unlawful but pursued to achieve a greater good. Michael Skerker adds the case of cyberespionage and, specifically, the ethics involved in government-sponsored data collection. He argues that the collection of metadata through automated keyword searches and data mining techniques poses a legitimate threat to “the autonomy of inhabitants of liberal states” (p. 251), and proposes a moral standard to determine when such “coercive state actions” are justified.

Many of the authors cite the same few examples – Estonia 2007, Israel-Syria 2007, Russia-Georgia 2008, Stuxnet, etc. – and these all focus on nation-state behavior which, understandably, is part of how cyberwarfare is commonly defined.  But what of rogue actors, cyberterrorists, hacktivists, and the like? Would the recent case of Russian operatives hacking into the Democratic National Committee’s email system to publicly reveal private political communications in the weeks leading up to an election be considered cyberwarfare? If so, how applicable are the prescribed ethical principles in the absence (or redefinition) of “civilian targets” or “collateral damage”? If not, is that an indication that conceiving of cyberwarfare in strictly military terms between nation-state actors is too limiting and thereby less relevant?

Binary Bullets is a thorough and comprehensive presentation of many of the ethical challenges raised by the prospect of cyberwarfare and should be read by anyone not only interested in ethical philosophy but also in international law, military strategy, and the politics of technology. The volume’s breadth of content and perspectives is sure to greatly expand one’s understanding of both evolving ethical considerations in technological spaces and of cyberwarfare itself.


Friday, February 10, 2017

President Trump and the Demise of Net Neutrality...

Regardless of which political echo chamber you prefer, your social media news feed is likely overtaken with stories and opinions related to President Trump's first few weeks in office. But barely noticed has been, perhaps, the single most consequential and important public policy development of all... the end of the Internet as we know it.

I speak, of course, of Net Neutrality. Readers of The Nerfherder are well aware that I've been trying to raise awareness about this issue for 15 years (!) now.

Here's what you need to know.  The Internet that you've known your entire life has always been neutral. We've had Net Neutrality all this time. It's known as the First Amendment of the Internet, and it's a legal principle that states that all data must be treated equally in terms of how it's routed across the Internet's infrastructure. Whether you're trying to reach Google's website, or a college student's humble little blog, you're going to get there, and in a way that's the same regardless of how well capitalized the destination site happens to be.

That's all about to change, and with hardly anyone saying boo about it.

The telecoms have long wanted to do away with this system. They've lobbied heavily to be able to create an "EZ-Pass toll lane" for Internet traffic, whereby companies and individuals would have to pay untold fortunes in order to have people reach their websites faster, and every other website would be relegated to the sidelines. Scholars have long pointed out the immediate economic impact this would have on startups, entrepreneurship, and innovation - not to mention how it would greatly impair the ability of individuals to express their free speech in a manner where people might actually be able to view what they post.

If you don't have the resources of Google or Microsoft to pay off the telecoms, good luck to you.

During the transition period, President-elect Trump signaled his policy preference against Net Neutrality by naming Jeff Eisenach and Mark Jamison to oversee hiring and policy at the FCC (the agency that sets Net Neutrality policy). These men have histories of lobbying on behalf of the large telecoms and favoring more mergers within the telecom sector (reducing competition).

Then, once in office, President Trump appointed Ajit Pai as the new Chairman of the FCC. In just his first two weeks, as reported by the New York Times, Pai has been aggressively assaulting various Net Neutrality rules - stopping nine companies from providing discounted high-speed internet service to low-income individuals, blocking a scheduled vote that would have overhauled the pay-TV set-top box market, and, perhaps most importantly, closing an investigation into whether AT&T, Verizon, and T-Mobile violated the law by giving preferential treatment to some websites and web services over others. This is the definition of violating Net Neutrality, and the FCC Chairman signaled this week that when telecoms do so, there will be no consequences.

Somehow, some way, this issue has become very partisan over the last decade. It shouldn't be.  Republicans ought to support an Internet which values free speech over censorship, which values entrepreneurship and innovation, and encourages a thriving competitive free market. As soon as you get rid of Net Neutrality, the small handful of giant telecoms will have the ability to decide what Internet content people will likely see and which web services people will likely use.

Unfortunately, the telecoms have lobbied for 20 years and have contributed millions of dollars to members of Congress (from both parties) in order to frame the issue as one where "government shouldn't regulate the Internet". But here's the problem... regardless of which side of the debate you support, regulation is the end result. If you are against Net Neutrality, the giant telecoms will regulate how websites operate and what content people will be able to access; meanwhile, if you are pro-neutrality, you're regulating the telecoms and giving a structural advantage to the websites. Either way, you're advocating for the selection of certain winners and losers. I would argue that the "free market" isn't necessarily served by either, so the question is: would you rather have a free marketplace for the telecoms or for cyberspace? Where is competition most likely to occur?

During the presidential transition period, I gave an interview urging people not to be alarmists, though when asked if Net Neutrality was under threat and should supporters be worried, I replied absolutely. The actions of the new FCC Chairman have now made it likely that the end of the Internet as we know it is only weeks away.


Tuesday, January 31, 2017

I just got my Raspberry Pi. Now what?

I recently bought the Raspberry Pi 3 Model B. When it showed up at my doorstep I had no idea how to get started. Most online tutorials jumped straight into beginner projects, but you need to set up and configure the Pi before you can do anything with it. So here goes.

The Raspberry Pi literally comes as just a motherboard, so most likely you'll want to get it as part of a Raspberry Pi "kit" that includes a charger, cables, protective case, and SD Card. Don't spend more than $50 total.

It will look like this...

Now, before we can do anything with the Pi, we have to give it an operating system. For Windows users, download the free Win32DiskImager, and unzip it to your computer. Then, download Raspbian which is the official operating system for Raspberry Pi, and unzip that to your computer as well.

The next step is to open that Disk Imager program, click the blue folder icon, go find your Raspbian image file, and click to "Write" it to your SD card (which you should've put in your computer's SD card reader).

Believe it or not, you're almost finished. The only remaining task is to put that SD card into your Raspberry Pi and turn the Pi on.

Here is definitely something important to be aware of, though.  The first time you break out your Raspberry Pi and want to install your Raspbian operating system, you MUST connect your Pi to a monitor, mouse, and keyboard.  You'll only need to do this the first time you use it, but it's an unavoidable step.  You can't use a laptop either.  Without realizing this, after getting my Pi in the mail, I then had to wait a few days, bring the Pi into my office at work, take over a colleague's USB mouse and keyboard for an hour, and finally (and this was the real challenge) I had to find a monitor that not only used an HDMI cable but also had the right adapter for that HDMI cable (the adapter with the cord didn't work for some still-unknown reason).

But here's the good news... even though hooking the Pi up to a mouse, keyboard and monitor the first time can be a pain, once they're all connected and you simply power it on, the operating system automatically loads, and just like that you have a new pocket-sized computer.

You're totally finished and ready to experiment with all of those beginner Raspberry Pi projects that you see all over the Web.

One last thing, which is optional but highly recommended.  As of now, your Pi is 100% ready to use, but you still need it connected to a keyboard, mouse, and monitor whenever you want to do something with it.  Why not make it "headless"?

To make your Pi headless, you just need to set up some type of Remote Desktop program so that your laptop or main computer can take control of your Pi when you want it to and give you an interface.

I recommend the VNC Viewer.  While your Pi is still hooked up to a monitor, boot it up. What's nice is that the VNC Server software is already built in to Raspbian, so all you need to do is click on Menu > Preferences > Raspberry Pi Configuration > Interfaces, and then click to Enable VNC.  You should also double-click on the VNC Server icon on the top-right of the desktop screen and take note of your IP address. Reboot your Pi. You can now disconnect the mouse, keyboard, and monitor.

Your Pi is headless, so when you want to control it from your laptop or main computer, download and install the VNC Viewer software linked to above, then simply click to Create New Connection and enter your Raspberry Pi's IP address.  It will display a nice Raspbian interface for your Pi so you'll never need to connect those external pieces of hardware again.

You're off to the races!  My first few projects... 1) Turn the Pi into a Web server, 2) turn it into a Minecraft gaming server, 3) turn it into a RetroPie gaming console, and 4) turn it into a Kodi device for use with my TV.


The Privacy Paradox Podcast...

This morning I heard about The Privacy Paradox on NPR radio. It's an "interactive podcast", which, as best I can tell, means that after each new episode is released, listeners are directed to complete a series of challenges, and the next episode reviews the listeners' collective results.

In this case, there will be 5 challenges that you can complete with the goal of "taking back your digital identity".  The challenges include finding out what your phone is tracking about you, discovering how algorithms learn about and then sell your identity, reclaiming your privacy, and more.

I don't know if this is already the case, but the thought occurs to me that for a podcast to truly be interactive, rather than having pre-determined content, users' results could help decide what the next round of content ought to be.  For instance, the podcast's guest speakers could be selected based on the privacy findings of the users' first challenge on smartphone surveillance.

Another intriguing aspect of this project is the Privacy Personality Quiz to see if you are a Shrugger, Realist, or Believer.

For the record, I was notified that I'm a Realist.

This seems like a useful project that also seems like it could be both entertaining and fun.  Who wants to join me and then have a discussion?  You can choose whether it would be private or public, of course.


Friday, January 27, 2017

How to Install Minecraft Mods...

When you're ready to step your game up to the next level in Minecraft, it may be time to experiment with "mods".  Mods are not part of the official Minecraft game, but are instead created by other players.  They can make it possible to build unique items, interact with new kinds of creatures like dinosaurs, and even change the game's graphics to 3D. Basically, anything is possible through mods, including changing the basic rules of the Minecraft game itself.

Go explore what's out there.  Here are the steps for installing new mods...

1.  Download and install Minecraft Forge.  This is an API that lets you install mods for Minecraft.  Go to the Minecraft Forge downloads page, click on the version of Minecraft you have (for example, 1.11.2), and on the next page click "Installer-win".

2.  Run the installer file that you just downloaded.  Make sure that "Install Client" is selected and click "OK".

3.  Now start up Minecraft.  You should notice that in the bottom-left corner, in the drop-down "Profile" menu, there is a new profile labelled "forge".  Select it and click "Play".  After a little bit of updating, the Minecraft game menu will appear and display a new button labelled "Mods".

4.  At this point, Forge is successfully installed and you are ready to play mods.  However, you still need to go get a few.  For this example, I'm going to get the JurassiCraft mod to play in a world with dinosaurs.

5. First download the LLibrary jar file (this won't be necessary for all, or even most, mods, but it is required for JurassiCraft), then download the JurassiCraft jar file (the file links are towards the bottom of these two pages).  Save them both into your Minecraft "mods" folder.

  If you don't know where your "mods" folder is, do the following...

  • Start Minecraft as if you were about to play a new game (in other words, open the "launcher").
  • In the bottom-left corner, underneath your profile name, click the button labelled "Edit Profile".
  • Look at the field labelled "Game Directory". That is the location of your Minecraft folder, and the "mods" folder lives inside it. Write it down or remember it. For this example, my folder is located in "C:\Users\RobbieD\AppData\Roaming\.minecraft".
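To see where that folder usually lands, here's a small Python sketch. It assumes the standard Windows install location (the .minecraft folder under APPDATA); if you changed the Game Directory in your profile, use that path instead. The function name is mine, not part of any Minecraft tooling.

```python
import os
from pathlib import Path

def default_mods_folder():
    """Best-guess default Minecraft mods folder.

    Assumes the standard Windows install location (the .minecraft
    folder inside APPDATA); if you changed the Game Directory in
    your launcher profile, use that path instead.
    """
    appdata = os.environ.get("APPDATA", str(Path.home() / "AppData" / "Roaming"))
    return Path(appdata) / ".minecraft" / "mods"

mods = default_mods_folder()
mods.mkdir(parents=True, exist_ok=True)  # create it if Forge hasn't yet
print(mods)
```

On a typical Windows setup this prints something like C:\Users\YourName\AppData\Roaming\.minecraft\mods, which is where the two jar files from step 5 should be saved.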

6. You're finished! When you launch Minecraft and click your new "Mods" button, you should see your new mod (ex. - "JurassiCraft") appear on the left. Play a new game with this mod and you'll have lots of new, non-standard features added to the game.  Like dinosaurs.