Friday, August 19, 2016

This post is not about politics

This post is not about politics. 

But it is about the election, and it’s important. Let me state it bluntly: the increasing computerization of our electoral process is putting our democracy at risk. The process as it exists now is broken and we need to bring back a foolproof paper balloting system that everyone can understand and trust.

Look at two of the many story lines of this year’s election. On the one hand, we have a candidate who has repeatedly claimed that the political system and the election are “rigged”. He’s already predicting that if he loses it will be because the other side cheated. Put aside your opinion of the validity of this claim: the fact is that he may, and probably will, seek to delegitimize the likely result of the election, an idea that will undoubtedly resonate with many of his supporters. An election mistrusted by a large portion of an angry electorate does not bode well for good and peaceful governance of our nation after election day.

Now consider the second story line: the very disturbing extent to which many of our government systems have been penetrated by America’s adversaries. Whether it was the Russians, or WikiLeaks, or another intermediary, the fact is that some very important systems that were (or should have been) well-guarded were compromised and hacked. This includes the most protected servers of the NSA. These stories do nothing to instill confidence in America’s systems amongst the general public. Anyone paying attention would conclude that no system is hack-proof against a determined, skilled and well-funded adversary. I believe that’s true; don’t you?

Now look at the way elections are conducted across America. It’s a vast patchwork of locally-managed, often shoestring operations incorporating various degrees of computer and software tools. Some touchscreens here, some Windows 2000 operating systems there, you name it, it’s out there. Security? Maybe. World-class IT security? Don’t bet on it. The men in black at the NSA (who can’t even guarantee the security of their own systems) are not managing the security at the local polling place. The systems out there are vulnerable and we don’t even know how bad it is.

We have been waiting many years for the election process to mature into a secure, stable, uniform process that takes advantage of computerized tools; one that would be secure, easy to use, fast and auditable. And above all, trustworthy.

It has not happened. It has only gotten worse. I have followed this subject for a long time as part of my course in IT Ethics at Immaculata University. The systems continue to age, break down and expose their flaws, while vendors and local electoral officials fight a losing rearguard action to keep up. The systems have proven to be hackable and failure-prone over and over. In many cases, the lack of auditable paper trails has resulted in votes being lost again and again. This is not speculation, but reported fact. And now we face the prospect of foreign adversaries with an interest in meddling in our election, coupled with an angry faction ready to believe that the whole process is crooked.

A recent op-ed piece in the New York Times by Zeynep Tufekci, called Bring Back Paper Ballots, makes a strong case against our broken system and urges us to return to a paper balloting system that is impossible to hack, fully capable of being audited and re-verified, baby-simple to use and worthy of our trust. Among the quotes in the piece is this one from Matthew Green, a specialist in cryptology and cybersecurity at Johns Hopkins University (no Luddite he): “There is only one way to protect the voting systems from a nation-state funded cyberattack: Use paper.”

I have been convinced by Tufekci’s argument and I agree that our electoral process is one place where computerization will work not to our benefit but to our detriment. As an IT guy it’s hard for me to admit that, but as a citizen, it’s a no-brainer.

Paper-based systems need not be primitive or cumbersome. My county (Chester County, PA) uses an Optical Mark Recognition (OMR) system, often called “fill in the bubble”. It’s simple and scannable, results can be tallied quickly, it’s hard to tamper with and, best of all, the paper can be saved and recounted if there’s a dispute. I think this should become the electoral standard everywhere.
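To see why “fill in the bubble” counting is so easy to automate and to audit, here is a toy sketch – not any vendor’s actual software, and the candidate names are invented – of tallying scanned marks and then re-verifying the count from the retained paper:

```python
from collections import Counter

def tally(ballots):
    """Count one vote per ballot; set aside over- and under-votes for review."""
    counts = Counter()
    set_aside = 0
    for marks in ballots:
        if len(marks) == 1:               # exactly one bubble filled: valid
            counts[next(iter(marks))] += 1
        else:                             # blank or over-voted: hand review
            set_aside += 1
    return counts, set_aside

# Hypothetical scanned ballots: each is the set of bubbles read as filled
ballots = [{"Adams"}, {"Baker"}, {"Adams"}, set(), {"Adams", "Baker"}]
first_count, set_aside = tally(ballots)

# Because the paper is retained, a recount is just the same tally run
# again over the same ballots, and it must produce the same totals.
recount, _ = tally(ballots)
assert first_count == recount
```

The point of the sketch is the last three lines: with paper in hand, anyone can repeat the count and compare.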

We cannot allow the results of our next election, and many after that, to be put at risk and tainted by doubt and denial. We must have a process that every citizen can trust and no one can tamper with. Let’s go back to a good paper balloting process.

Monday, June 13, 2016

Who's really to blame for the web's hidden bias?

Facebook has been much in the news recently. You may have followed the story: the tech site Gizmodo quoted unnamed former Facebook contractors who said they routinely suppressed conservative viewpoints in the “trending stories” news feed. This caused many (mainly conservatives, as you might expect) to raise the hue and cry about the alleged liberal bias. Facebook has officially denied any such hidden agenda.

Now I could use this blog to question why anyone gets their news from Facebook in the first place, but that would be futile. The fact is they do. According to a 2015 Pew Research Center study, 63% of Facebook users use it to get the news. And 40% of users agree that it is “an important way to get the news.”

Or I could compare liberal and conservative thinking when it comes to suspected biases in the media and society at large. (“Why is it usually the conservatives who see these conspiracies at work?” I might ask – but I won’t!)

I might also reiterate some of what the Wall Street Journal concluded, saying “…using human editors to curate trending topics inevitably introduces biases, both conscious and unconscious.” The Journal (no hidden liberal bias here!) said that in this regard Facebook operated just like any other news room.

But in a larger sense, in my opinion, we’re chasing the wrong bogeyman. Facebook is not to blame; we are. The fact is that we reveal our own prejudices and preferences with every click we make, and the internet is designed to reflect that back to us. As Frank Bruni wrote in the New York Times, in a column called “How Facebook Warps Our Worlds”, the internet is not rigged to give us a conservative or liberal bias, until we rig it that way ourselves. It is, he said “designed to give us more of the same.”

Every time we click like, follow a link, join a page or a group, we are telling the internet what we want, and the internet will give us more just like that. Just look at all the ads you see now, all chosen just for you. Every click you make determines what you’ll see next.

Google does this every time you search. It records the link you clicked on – all the links you’ve ever clicked on – and uses that knowledge to serve up search choices for you next time that more closely match your profile. It gives you what it has calculated you want. That’s part of Google’s secret sauce.
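As a rough illustration of the kind of feedback loop at work – this is a toy sketch, not Google’s actual ranking code, and the site names are made up – imagine re-ordering search results by how often a user has clicked each site before:

```python
from collections import Counter

# Hypothetical click history: how often this user has chosen each site
click_history = Counter({"lefty-news.example": 12, "recipes.example": 1})

def personalize(results, history):
    """Re-rank results so frequently clicked sites float to the top."""
    return sorted(results, key=lambda site: -history[site])

results = ["neutral-wire.example", "righty-news.example", "lefty-news.example"]
ranked = personalize(results, click_history)
# Every click adds to the history, which pushes similar results higher
# next time: the loop feeds on itself.
```

Run once, the most clicked site jumps to the top; run over years of clicks, the effect compounds into the bubble described below.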

Eli Pariser wrote an eye-opening book in 2011 called “The Filter Bubble”, which described all the ways that the web’s hidden gatekeepers now build a bubble around us, all in the name of customization. We get the web experience just the way we want it, without even asking for it. More and more we’re living in a house of mirrors, in which everything we are is reflected back on us.

(By the way, I highly recommend Pariser’s TED talk about the Filter Bubble. It’s been viewed more than three and a half million times and it’s worth nine minutes of your time too.)

So doesn’t each of us win when our likes and dislikes rule the web experience? I don’t think so. The danger is real that our minds will become increasingly narrowed by reinforcement of our opinions. The web will just continue to prove us right, in whatever we believe, from rigid political dogma to nutty conspiracy theories to prejudices of all kinds – you name it. It can’t be good to be shown only content that agrees with you.

The fact is that we need to have our beliefs challenged by differing viewpoints. And we need to be available when some serendipitous idea or story comes our way. We have to be there to see it.

I used to buy the New York Times every morning and “read” it from first page to last. I didn’t read everything, but at least I looked at it, bumping by chance into all kinds of stories I never would have sought out: cooking, travel, the chess column, whatever. Now, my news consumption is quite different. I subscribe to the Times web site and I only click on what I want to see. No serendipity here folks.

So don’t blame Facebook or some other online source for serving up a biased agenda. Look in the mirror.

Tuesday, February 16, 2016

What's that tech you're wearing?

I’ve been thinking about wearable technology lately. Strides continue to be made in this space, and there is much more to come on the horizon.

The building blocks of wearable tech have been around for a while: small sensors, minute power sources (or the harvesting of body heat) and short-range networking options for small gadgets, like NFC. But for a long time this technology was a solution in search of a problem. It was not until consumer applications were developed that demand reached the tipping point where these devices became a market.

Mention “wearable technology” to some people (as I have) and they react as if it’s a creepy-spooky idea. Then rattle off all the devices that surround us – fitness trackers, smartwatches, Google Glass and so on – and you’ll see the light bulb come on. Once everyday applications become ubiquitous, the idea no longer seems outlandish, and consumer demand will take over.

But so far, I think, the applications have mostly been consumer-friendly playthings, not something that companies see as a driver in the workplace. That is going to change. As people interact more and more with the internet of things, as our jobs rely more and more on computers and information, and as the manufacturing sector edges toward the next version of the “factory of the future”, it is all but certain that the humans in those environments will need to be wired in as well.

Consider worker safety. Every year over two million workers die in job-related accidents, an astonishing number. Much effort and expense is devoted (and should be) to reducing that number. Much (although not all) of this is mandated by regulation, but more can and should be done. “Safety is really a big issue for the enterprise,” says Shawn DuBravac, chief economist for the Consumer Technology Association. “If I’m on a construction site, I’m buying them helmets, harnesses, safety gear. Why not also buy them air sensors, infrared cameras and sensored safety vests?” The costs of these devices have come down considerably, so it makes no sense not to deploy them.

Trackers like Fitbit and many others now routinely monitor heart rate, breathing and other indicators of physical health. Why not deploy them to all workers who toil in a stressful or physical environment, so that problems can be spotted before they happen? The application of this technology in professional sports should also be a natural; think of the NFL’s current concern for the effects of head trauma on its players. How hard would it be to develop sensors in helmets that could measure the effects and alert trainers to the dangers?
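A minimal sketch of this kind of early-warning monitor – the thresholds and readings here are invented for illustration, not medical guidance:

```python
def flag_alerts(heart_rates, max_bpm=185, min_bpm=40):
    """Return the indices of samples that fall outside the safe range."""
    return [i for i, bpm in enumerate(heart_rates)
            if bpm > max_bpm or bpm < min_bpm]

# Hypothetical readings (beats per minute) streamed from a worker's tracker
samples = [72, 95, 130, 190, 188, 150]
alerts = flag_alerts(samples)
if alerts:
    print(f"Check on this worker: abnormal readings at samples {alerts}")
```

Real systems would obviously tune thresholds per person and per task, but the principle is the same: the device notices trouble before a supervisor could.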

I would also like to see more application of sensors that can monitor the alertness of individuals who simply must not be allowed to nod off: think of pilots, truck drivers and power plant supervisors. This technology is just in its infancy and has a long way to go. Wikipedia has an article on “Driver Drowsiness Detection”, and Volvo is testing a system, although not a wearable one, that will monitor driver alertness by detecting “how open the driver’s eyes are, whether he or she is looking forward while the vehicle is in motion, and the position and angle of the driver’s head.”

All these have the potential for harnessing machines and computers to keep people safe and improve performance. But I’d be remiss if I didn’t mention the potential downside concerns. If companies encourage – or require – workers to wear the sensors, what is done with the data? How might it be misused? Can it be deployed secretly? Who owns the data and what are the rules about sharing it? Will a worker consider these devices invasive or a violation of their physical integrity? Can workers be fired if their numbers are not up to par?

It’s also worth speculating about what new forms wearable technology may take. Once we are used to sensors in our clothing and strapped to our bodies, the next step may be something even more invasive. A startup called Chaotic Moon Studio is pushing a concept for a fitness tracker that is essentially a technology tattoo. A web site devoted to this subject describes it this way: “Tech Tats uses electroconductive ink to connect sensors pressed against the skin, which can keep an eye on body vital signs, which could include temperature or vital signs. These can be stuck anywhere on the body, making them more discreet than standard wrist-based trackers.” Can chip implants be far behind?

We have much to hope for from this technology, and much to be wary of. But there’s no doubt that we’re sure to see many more examples of wearable technology in our lives – and on our bodies – soon.

Wednesday, November 18, 2015

High tech meets high kill

Imagine an organization that’s working to secretly invent the deadliest and most insidious ways to kill people.

Their work is high-tech, cutting-edge and incredibly lethal. One project is the development of what they call hunter-killer robots: self-propelled, autonomous units, some in humanoid form, that are programmed to enter a hostile environment, seek out an adversary and kill – the targets could be enemy soldiers or individuals the robot is programmed to identify through face recognition software. The robots act independently and, once released, they’re relentless: they don’t stop or come back until they take out the target.

This organization is also working on the next generation of lethal drones: smaller, faster, more sophisticated and more deadly than today’s models. They’re small, they can swarm and act in concert, they can act without external control, they can flood a battlefield with kill shots, or act in stealth-mode, following the enemy into any space or hiding place. Some are hybrids of insects whose brains have been implanted with control technology. Once programmed, they also act autonomously. We picture today’s drones as model helicopters flying over the terrain; the next generation will act more like a vast swarm of killer bees.

A third project is called the “Man/Machine Interface” and is described as “the future of brain-computer interface technologies.” This is just what it sounds like: implanting computer chips inside the human brain. Imagine that our secret organization is already testing brain implants and a wired jack into the base of a human brain: the goal is to enhance combat performance, and to make the subject more suitable for combat and control.

By now you may be hoping that I’m talking about SPECTRE, the evil entity battling James Bond in the latest movie release of that series. Or perhaps it's some science fiction scenario that I dreamed up to challenge you to think about the future consequences of today’s trends. So sorry to dash your hopes. As you really suspect, the organization I’m describing is real, it’s in operation today and it’s working on all of the above. Every word is true. And those are just the projects we KNOW about. The ones that are classified – well, who knows?

I’ve just finished reading a fascinating new book called “The Pentagon’s Brain” by Annie Jacobsen. It’s a history of DARPA, the Defense Advanced Research Projects Agency. DARPA (originally ARPA, just for the record) has been the military’s high-tech think tank since the early years of the cold war. Their mission is to make sure that the US stays many steps ahead of its adversaries when it comes to using technology to its advantage in any present or future conflicts. They’ve had quite a track record, as I’m sure you know. They brought us the hydrogen bomb, but they also brought us the internet. They invented the GPS satellite system, but they also invented Agent Orange. Drones, biological weapons, Star Wars – DARPA has had a hand in it all. And on the whole, when you see the US’s technological superiority in every conflict, you have to say they’ve had more successes than failures. For obvious reasons, I’m glad they’re on our side.

So I was impressed by most of the book as it surveyed the agency’s history. But things became increasingly scary in the final three chapters, where Jacobsen describes DARPA’s current workload. The three projects described above – hunter-killer robots, drone swarms and the man/machine interface – raise serious ethical issues that the military seems neither willing nor interested in addressing. Its mission is to make sure we can slaughter every enemy; how we do it is just a means to that end, and no one questions the means.

Ethical concerns include the control aspect (how do we ensure all of this stays under human control, and works flawlessly?), the moral aspect (should we fight future wars against human adversaries with machines and robots that can slaughter at will, with no risk to ourselves?) and the human aspect (what does it mean to infuse technology into a human brain or body, and what are the implications for enslavement from within?). This is not idle speculation, since all of these systems and projects are in the pipeline right now. We will see them in our lifetimes.

The military-industrial complex (which President Eisenhower warned about in 1961, and which Jacobsen illuminates in the book) has the power to create a dark and nightmarish future. Extrapolating these weapons out to a logical conclusion will produce visions of The Terminator or 1984 or the ‘Borg’ from Star Trek. Consider hunter-killer robots released into the world, driven by software and operating independently. What is the guarantee that they will stay under human control, as they act autonomously to carry out their deadly mission? We in technology know all too well that no system ever works perfectly. Will jacking a computer into a human brain (DARPA has tested this with lab animals and plans to use human subjects soon, if they haven’t already) make tomorrow’s soldiers resistant to pain, immune to fear – and perfectly obedient to their masters? The scenarios are endless and they are not pretty. And I repeat that these projects are only the ones we know about. DARPA’s budget and full project list is highly classified.

Jacobsen has few suggestions on how to steer off the road we’re traveling, and I don’t either. What is essential is that an informed citizenry provide the eternal vigilance that our government requires, and that we elect leaders who will include ethical consideration in the decisions that impact our national defense. We have to protect our future as well as our country.

Here's a link to "The Pentagon's Brain" on Amazon. But support your local bookstore and buy it there.