Friday, March 10, 2017

It's time to start talking about tech addiction again

The recent snafu at the Oscars should reignite a topic that we’ve been talking about for a long time. In the early 1990s it was called internet addiction. Lately it’s been called screen addiction or tech addiction. Whatever you call it, we all know it when we see it, and some of us may see it in ourselves. I know I do.

You all know the Oscar story. An epic fail when the night’s biggest announcement, for Best Picture, was mishandled because the presenters were handed the wrong envelope (the one for Best Actress) by the PricewaterhouseCoopers accountants. “La La Land” was announced as the winner when the real winner was “Moonlight.”

The technology angle is this: the PwC guy backstage, Brian Cullinan, screwed up royally. Later it was learned that he was tweeting backstage on his smartphone – a photo he posted of Best Actress winner Emma Stone is all over the web. And after that it emerged that he had been “asked” ahead of time not to tweet backstage. Obviously he had a job to do, and tweeting was not part of it. So he was told not to, yet he did. Did he not know better? Of course he did.

(Cullinan, shortly after this colossal Charlie-Foxtrot, deleted the tweet in question. Note to all: you can’t delete a tweet once you’ve sent it to all your followers.)

Okay, mistakes happen. We all make them, although hopefully not on worldwide television. But what I see here is a good guy who wants to do his job well, yet is unable to stay off his gadget, despite being told otherwise. Hence a revival of the topic of screen addiction.

I’m now reading a fascinating new book called “Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked” by Adam Alter. It digs deep into this subject and it’s a sobering story.

Alter makes the case that this is a real addiction – a behavioral addiction, as opposed to the chemical addictions we usually think of first in association with that word – and that half of us, according to the publisher’s blurb, “would rather suffer a broken bone than a broken phone.”

This addiction has a real physical basis. According to Alter, “We are engineered in such a way that as long as an experience hits the right buttons, our brains will release the neurotransmitter dopamine. We’ll get a flood of dopamine that makes us feel wonderful in the short term, though in the long term you build a tolerance and want more.”

As a parent, you may bemoan your child’s devotion to (or obsession with) video games, screens, social media and the like. But when you check your smartphone at a long red light, or as the last thing before hitting the pillow at night, are you really any different? I’m describing myself here.

Dr. Alter, an associate professor at NYU’s Stern School of Business, makes a strong case for the negative consequences of this addiction and vividly describes the tech industry’s relentless campaign to get us hooked and keep us hooked.

He was interviewed in the New York Times recently (“Why We Can’t Look Away From Our Screens”) and the piece is well worth reading. Among his insights: “game producers will often pretest different versions of a release to see which one is hardest to resist and which will keep your attention longest.” And: “I spoke with a young man who sat in front of his computer playing a video game for 45 consecutive days. The compulsive playing had destroyed the rest of his life.”

This is a serious topic, and it’s not just about our kids playing video games; we are all hooked to some extent. As with any addiction, the first step toward a cure is acknowledging the problem. Only then can we regain control.

Readers of this blog know that I do not scoff at the advance of technology and innovations: on the whole I embrace them. But no addict is in control of himself, and I refuse to let others addict me. We need to be the ones in control.

Perhaps the most telling story in Alter’s book is the description of how many tech leaders have restricted the use of technology by their own kids; they know too well the dangers. And get this: Steve Jobs was the father of the iPad, but he never let his own kids use one.

Wednesday, December 28, 2016

2016 was a horrible year in tech too


I realize this blog has been quiet for a while this year. I don’t have to tell anyone that 2016 has been a unique, and uniquely stressful, year. So please excuse my decision to set this space aside amid the distractions of the recent election. Discussions of technology issues seemed trivial compared to the bigger issues at stake this year.

But if you’ll join me in wishing 2016 a NOT-fond farewell, let’s do it by looking at the year in tech. There are three tech stories of 2016 that I think loom largest, especially for their ethical implications. Before reading further, what would you guess they were? I’d love to know.

As your number one tech-ethics story, did you pick the proliferation of fake news? That’s the one that disturbs me the most, and plenty has been written about it. We could see it coming for a while. I remember talking in my ethics class about the removal of gatekeepers from our sources of news and content, and how this had both a good side and a bad side. By now you probably know the argument by rote. The good: the removal of gatekeepers gave us all a voice, crowdsourcing would provide an equal or better view of reality, we would all be publishers and the barriers would fall. Vox populi, vox dei. The flip side: no responsible party fact-checking, the proliferation of hidden agendas, the crackpot fringe side-by-side with the sober mainstream.

So we knew the dangers. What we couldn’t expect was the onslaught of purposeful disinformation and the degree to which a gullible public would be manipulated by these disreputable players. If Barnum were alive today he’d tell us that there’s a fake news consumer born every minute. The crazy fringe is no longer a fringe; it’s right in the center of our public life. Do you know people who follow fake stories and propaganda, no matter what the facts? I thought so. 

Fake news is a poison to our political discourse. Here’s the cure, and it’s one not easily achieved: a healthy dose of skepticism. Sorry if this seems simplistic, but it’s true. When encountering “news” that purports to tell some crazy, unbelievable story that doesn’t ring true – even if it fits perfectly with your political view of the way things are – question it. Fact-check it. Verify the source and look for confirmation. There are plenty of ways to do this on the web and I won’t even bother to point you to them. But we must learn to question the nonsense that circulates on the web before swallowing it.

I don’t pretend to know how we establish this skepticism in our fellow citizens. Perhaps we teach our children while they can still learn. But this is what we must do. We can’t keep being fooled by those on the web who would manipulate us with easily disproved nonsense. Just one example: a child sex ring run out of a Washington DC pizza parlor. Come on, really? 

The second big tech news story of 2016? Did you pick the hacking of the US electoral process by the Russians? This is huge, dangerous – and not disputable. 

I’ve written before about the vulnerability of our election process to a determined hacker, but never has it been more obvious that we were manipulated by a concerted effort from a foreign adversary. Make no mistake: this was a carefully planned strategy to penetrate email systems, release damaging information and influence the electorate in many malicious ways, aimed at getting the Moscow-preferred candidate elected over his opponent. The Russians didn’t even have to hack the computers at the polling booths – the link in the chain we always thought was most vulnerable. Instead they hacked our heads and poured in all manner of disruption and distraction.  

And it worked. Can you imagine how this has encouraged the Russian tech masters? Can you doubt that they are already planning their moves for future US elections? 

This is tremendously concerning, especially since our president-elect shows no signs of understanding or caring about this issue. Our democracy is at risk, and only we citizens and the good people who remain in government can do something about it. Let’s hope we can keep the flame of democracy burning and find a way to turn things around next time. 

My third story? The Samsung Galaxy Note 7 – that sleek, beautifully manufactured Android masterpiece by one of the world’s biggest gadget makers – that damn smartphone that kept overheating and catching on fire! It’s an understatement to say this was a big black eye for Samsung. Consider the huge bad press, the US government intervention, the recalls, the replacement phones that also caught fire, and on and on. And if you’ve taken a commercial flight recently, you know the gate crew and the flight attendants announce that the phone is banned from the plane by the FAA. Could it be any worse?

I won’t go into the reasons for Samsung’s failure. But clearly, even the best can stumble; quality control is not something that can ever be taken for granted. In my class at Immaculata, I have the students read up on the case of the Therac-25, a notorious tech failure: a new radiation therapy machine, badly designed and poorly tested, that wound up killing or injuring the patients who had the misfortune to cross its path. I try my best to scare my young students into never being a party to a Therac-25 project. The Galaxy Note 7 story tells us that tech safety is still an important issue.

So there you have 2016 in tech: two attacks on our democracy and a firebomb in your pocket. Is it any wonder that we all feel that 2017 can’t come soon enough?


Friday, August 19, 2016

This post is not about politics

This post is not about politics. 

But it is about the election, and it’s important. Let me state it bluntly: the increasing computerization of our electoral process is putting our democracy at risk. The process as it exists now is broken and we need to bring back a foolproof paper balloting system that everyone can understand and trust.

Look at two of the many story lines of this year’s election. On the one hand, we have a candidate who has repeatedly claimed that the political system and the election are “rigged”. He’s already predicting that if he loses it will be because the other side cheated. Put aside your opinion of the validity of this claim: the fact is that he may, and probably will, seek to delegitimize the likely result of the election, an idea that will undoubtedly resonate with many of his supporters. An election mistrusted by a large portion of an angry electorate does not bode well for good and peaceful governance of our nation after election day.

Now consider the second story line: the very disturbing extent to which many of our government systems have been penetrated by America’s adversaries. Whether it was the Russians, or WikiLeaks, or another intermediary, the fact is that some very important systems that were (or should have been) well guarded were compromised and hacked. This includes the most protected servers of the NSA. These stories do nothing to instill confidence in America’s systems among the general public. Anyone paying attention would conclude that no system is hack-proof against a determined, skilled and well-funded adversary. I agree that this is true, don’t you?

Now look at the way elections are conducted across America. It’s a vast patchwork of locally-managed, often shoestring operations incorporating various degrees of computer and software tools. Some touchscreens here, some Windows 2000 operating systems there, you name it, it’s out there. Security? Maybe. World-class IT security? Don’t bet on it. The men in black at the NSA (who can’t even guarantee the security of their own systems) are not managing the security at the local polling place. The systems out there are vulnerable and we don’t even know how bad it is.

We have been waiting many years for the election process to mature into a stable, uniform process that takes advantage of computerized tools; one that would be secure, easy to use, fast and auditable. And above all, trustworthy.

It has not happened. It has only gotten worse. I have followed this subject for a long time as part of my course in IT Ethics at Immaculata University. The systems continue to age, break down and expose their flaws, while vendors and local election officials fight a losing rearguard action to keep up. The systems have proven to be hackable and failure-prone over and over. In many cases, the lack of auditable paper trails has resulted in votes being lost again and again. This is not speculation, but reported fact. And now we face the prospect of foreign adversaries with an interest in meddling in our election, coupled with an angry faction ready to believe that the whole process is crooked.

A recent op-ed piece in the New York Times by Zeynep Tufekci, “Bring Back Paper Ballots,” makes a strong case against our broken system and urges a return to a paper balloting system that is impossible to hack, fully capable of being audited and re-verified, baby-simple to use and worthy of our trust. Among the quotes in the piece is this one from Matthew Green, a specialist in cryptography and cybersecurity at Johns Hopkins University (no Luddite he): “There is only one way to protect the voting systems from a nation-state funded cyberattack: Use paper.”

I have been convinced by Tufekci’s argument and I agree that our electoral process is one place where computerization will work not to our benefit but to our detriment. As an IT guy it’s hard for me to admit that, but as a citizen, it’s a no-brainer.

Paper-based systems need not be primitive or cumbersome. My county (Chester County, PA) uses an Optical Mark Recognition (OMR) system, often called “fill in the bubble”. It’s simple and scannable, results can be tallied quickly, it’s hard to tamper with and, best of all, the paper can be saved and recounted if there’s a dispute. I think this should become the electoral standard everywhere.
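
For the curious, the counting side of OMR is conceptually simple, which is part of its appeal. Here is a minimal sketch in Python of how a tally over scanned bubble “fill ratios” might work; the thresholds, names and numbers are all my own invention for illustration, not how any real vendor’s scanner operates:

    # Hypothetical OMR tally: a real scanner measures how dark each bubble is;
    # here each ballot is just a dict mapping candidate -> fill ratio (0 to 1).
    MARKED = 0.60     # above this, the bubble counts as a vote (assumed threshold)
    FAINT = 0.20      # between FAINT and MARKED, a human should look at it

    def tally(ballots):
        counts = {}
        needs_review = []       # ballot indexes set aside for human eyes
        for i, ballot in enumerate(ballots):
            marks = [c for c, fill in ballot.items() if fill >= MARKED]
            unsure = [c for c, fill in ballot.items() if FAINT <= fill < MARKED]
            if len(marks) == 1 and not unsure:
                counts[marks[0]] = counts.get(marks[0], 0) + 1
            else:
                # Overvote, stray mark or faint bubble: set the paper aside.
                needs_review.append(i)
        return counts, needs_review

    ballots = [{"Smith": 0.85, "Jones": 0.02},
               {"Smith": 0.05, "Jones": 0.91},
               {"Smith": 0.45, "Jones": 0.40}]   # ambiguous ballot
    print(tally(ballots))      # ({'Smith': 1, 'Jones': 1}, [2])

The key point is the last branch: anything the machine isn’t sure about goes back to a human, and the paper itself is always there for a full recount.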

We cannot allow the results of our next election, and many after that, to be put at risk and tainted by doubt and denial. We must have a process that every citizen can trust and no one can tamper with. Let’s go back to a good paper balloting process.


Monday, June 13, 2016

Who's really to blame for the web's hidden bias?


Facebook has been much in the news recently. You may have followed the story: the tech site Gizmodo quoted unnamed former Facebook contractors who said they routinely suppressed conservative viewpoints in the “trending stories” news feed. This caused many (mainly conservatives, as you might expect) to raise a hue and cry about the alleged liberal bias. Facebook has officially denied any such hidden agenda.

Now I could use this blog to question why anyone gets their news from Facebook in the first place, but that would be futile. The fact is they do. According to a 2015 Pew Research Center study, 63% of Facebook users use it to get news. And 40% of users agree that it is “an important way to get the news.”

Or I could compare liberal and conservative thinking when it comes to suspected biases in the media and society at large. (“Why is it usually the conservatives who see these conspiracies at work?” I might ask – but I won’t!)

I might also reiterate some of what the Wall Street Journal concluded, saying “…using human editors to curate trending topics inevitably introduces biases, both conscious and unconscious.” The Journal (no hidden liberal bias here!) said that in this regard Facebook operated just like any other news room.

But in a larger sense, in my opinion, we’re chasing the wrong bogeyman. Facebook is not to blame; we are. The fact is that we reveal our own prejudices and preferences with every click we make, and the internet is designed to reflect that back to us. As Frank Bruni wrote in the New York Times, in a column called “How Facebook Warps Our Worlds”, the internet is not rigged to give us a conservative or liberal bias, until we rig it that way ourselves. It is, he said “designed to give us more of the same.”

Every time we click like, follow a link, join a page or a group, we are telling the internet what we want, and the internet will give us more just like that. Just look at all the ads you see now, all chosen just for you. Every click you make determines what you’ll see next.

Google does this every time you search. It records the link you clicked on – all the links you’ve ever clicked on – and uses that knowledge to serve up search choices for you next time that more closely match your profile. It gives you what it has calculated you want. That’s part of Google’s secret sauce.
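
Google’s actual ranking formula is a closely guarded secret, but the feedback loop itself is easy to sketch. Here’s a toy version in Python – every name and number is hypothetical – showing how a handful of past clicks can quietly reorder what two different users see for the same query:

    from collections import Counter

    # Toy personalization loop: sites you clicked before get a boost next time.
    click_history = Counter()          # domain -> past clicks by this user

    def record_click(domain):
        click_history[domain] += 1

    def personalized_rank(results):
        # results: list of (domain, base_relevance) from a generic ranker.
        # Each past click nudges that domain upward, for this user only.
        return sorted(results,
                      key=lambda r: r[1] + 0.1 * click_history[r[0]],
                      reverse=True)

    record_click("leftnews.example")
    record_click("leftnews.example")
    results = [("rightnews.example", 0.75), ("leftnews.example", 0.70)]
    print(personalized_rank(results))
    # leftnews.example now outranks rightnews.example: the bubble at work.

Nothing in that loop is sinister on its face – it’s just “give the user more of what they chose before” – which is exactly why the bubble is so hard to see from inside.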

Eli Pariser wrote an eye-opening book in 2011 called “The Filter Bubble”, which described all the ways the web’s hidden gatekeepers now build a bubble around us in the name of customization. We get the web experience just the way we want it, without even asking for it. More and more we’re living in a house of mirrors, in which everything we are is reflected back at us.

(By the way, I highly recommend Pariser’s TED talk about the filter bubble. It’s been viewed more than three and a half million times, and it’s worth nine minutes of your time too.)

So doesn’t each of us win when our likes and dislikes rule the web experience? I don’t think so. The danger is real that our minds will become increasingly narrowed by the reinforcement of our own opinions. The web will just continue to prove us right in whatever we believe, from rigid political dogma to nutty conspiracy theories to prejudices of all kinds – you name it. It can’t be good to be shown only content that agrees with you.

The fact is that we need to have our beliefs challenged by differing viewpoints. And we need to be available when some serendipitous idea or story comes our way. We have to be there to see it.

I used to buy the New York Times every morning and “read” it from first page to last. I didn’t read everything, but at least I looked at it, bumping by chance into all kinds of stories I never would have sought out: cooking, travel, the chess column, whatever. Now my news consumption is quite different. I subscribe to the Times web site and I only click on what I want to see. No serendipity here, folks.

So don’t blame Facebook or some other online source for serving up a biased agenda. Look in the mirror.

Tuesday, February 16, 2016

What's that tech you're wearing?


I’ve been thinking about wearable technology lately. Strides continue to be made in this space, with much more on the horizon.

The ingredients for wearable tech have been around for a while: small sensors, tiny power sources (or the use of body heat), and networking options that link small gadgets over short ranges, like NFC. But for a long time this technology was a solution in search of a problem. It was not until consumer applications were developed that demand reached the tipping point where these devices became a market.

Mention “wearable technology” to some people (as I have) and they react as if it’s a creepy-spooky idea. Then rattle off all the devices that surround us – fitness trackers, smartwatches, Google Glass and so on – and you’ll see the light bulb come on. Once everyday applications become ubiquitous, the idea no longer seems outlandish and consumer demand takes over.

But so far, I think, the applications have mostly been consumer-friendly playthings, not something that companies see as a driver in the workplace. That is going to change. As people interact more and more with the internet of things, as our jobs rely more and more on computers and information, and as manufacturing edges ever closer to the next version of the “factory of the future”, it seems certain that the humans in these environments will need to be wired in as well.

Consider worker safety. Every year over two million workers die in job-related accidents and illnesses, an astonishing number. Much effort and expense is devoted (and should be) to reducing that number. Much (although not all) of this is mandated by regulation, but more can and should be done. “Safety is really a big issue for the enterprise,” says Shawn DuBravac, chief economist for the Consumer Technology Association. “If I’m on a construction site, I’m buying them helmets, harnesses, safety gear. Why not also buy them air sensors, infrared cameras and sensored safety vests?” The costs of these devices have come down considerably, so it makes no sense not to deploy them.

Trackers like Fitbit and many others now routinely monitor heart rate, breathing and other indicators of physical health. Why not deploy them to all workers who toil in stressful or physically demanding environments, so that problems can be spotted before they happen? The application of this technology in professional sports should also be natural; think of the NFL’s current concern about the effects of head trauma on its players. How hard would it be to put sensors in helmets that could measure impacts and alert trainers to the danger?
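
Not hard at all, I suspect. Here’s a back-of-the-envelope sketch in Python of what the alerting logic might look like; the threshold and names are invented for illustration, and real concussion research uses far more sophisticated criteria:

    # Hypothetical helmet sensor check: flag any hit whose peak acceleration
    # exceeds a risk threshold so a trainer can pull the player for evaluation.
    RISK_G = 80.0                      # illustrative threshold, not a medical standard

    def on_impact(player, peak_g):
        if peak_g >= RISK_G:
            return f"ALERT trainer: {player} took a {peak_g:.0f}g hit - evaluate"
        return f"{player}: {peak_g:.0f}g hit logged"

    print(on_impact("#24", 95.3))      # ALERT trainer: #24 took a 95g hit - evaluate

The sensors and the radio link are the hard engineering; the decision logic, as you can see, is almost trivial.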

I would also like to see more application of sensors that can monitor the alertness of individuals who simply must not be allowed to nod off: think of pilots, truck drivers and power plant supervisors. This technology is just in its infancy and has a long way to go. Wikipedia has an article on “Driver Drowsiness Detection”, and Volvo is testing a system, although not wearable, that monitors driver alertness by detecting “how open the driver’s eyes are, whether he or she is looking forward while the vehicle is in motion, and the position and angle of the driver’s head.”
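
Volvo’s system is camera-based, but a wearable could use the same underlying idea, often called PERCLOS in the research literature: watch what fraction of recent moments the eyes are mostly closed. A minimal sketch in Python, with all thresholds invented and the eye tracker assumed to exist:

    from collections import deque

    FRAMES = 900         # e.g., 30 seconds of video at 30 frames per second
    CLOSED = 0.3         # eye-openness below this counts as "closed"
    DROWSY = 0.15        # alarm if more than 15% of recent frames are closed

    window = deque(maxlen=FRAMES)      # rolling record of recent frames

    def on_frame(eye_openness):
        # eye_openness: 0.0 (shut) to 1.0 (wide open), from some eye tracker
        window.append(1 if eye_openness < CLOSED else 0)
        if len(window) == window.maxlen and sum(window) / len(window) > DROWSY:
            return "DROWSINESS ALARM"
        return "ok"

Again, the logic is the easy part; building a sensor that measures eye openness reliably in a moving truck cab is where the real work lies.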

All these have the potential for harnessing machines and computers to keep people safe and improve performance. But I’d be remiss if I didn’t mention the potential downside concerns. If companies encourage – or require – workers to wear the sensors, what is done with the data? How might it be misused? Can it be deployed secretly? Who owns the data and what are the rules about sharing it? Will a worker consider these devices invasive or a violation of their physical integrity? Can workers be fired if their numbers are not up to par?

It’s also worth speculating about what new forms wearable technology may take. Once we are used to sensors in our clothing and strapped to our bodies, the next step may be something even more invasive. A startup called Chaotic Moon Studios is pushing a concept for a fitness tracker that is essentially a technology tattoo. Wareable.com, a web site devoted to this subject, describes it this way: “Tech Tats uses electroconductive ink to connect sensors pressed against the skin, which can keep an eye on body vital signs, which could include temperature or vital signs. These can be stuck anywhere on the body, making them more discreet than standard wrist-based trackers.” Can chip implants be far behind?

We have much to hope for from this technology, and much to be wary of. But there’s no doubt that we’re sure to see many more examples of wearable technology in our lives – and on our bodies – soon.