A controlled burn of oil in the Gulf of Mexico, May 19

David Brooks has a good column today on the Deepwater Horizon disaster that sums up a significant problem my last post touched on: modern life is made possible by various complicated technological-bureaucratic systems. And these things can go south rather quickly and surprisingly. Part of the problem is that they’re complex, and not managed well. That’s par for the course. But the tricky thing is our collective expectations: we (and often the people running them) expect them to just work, and our expectations are way wrong:

Over the past years, we have seen smart people at Fannie Mae, Lehman Brothers, NASA and the C.I.A. make similarly catastrophic risk assessments. As [Malcolm] Gladwell wrote in that 1996 essay, “We have constructed a world in which the potential for high-tech catastrophe is embedded in the fabric of day-to-day life.”

So it seems important, in the months ahead, to not only focus on mechanical ways to make drilling safer, but also more broadly on helping people deal with potentially catastrophic complexity. There must be ways to improve the choice architecture — to help people guard against risk creep, false security, groupthink, the good-news bias and all the rest.

This is about right. But not exactly.

What is a “natural disaster”? The question is important, not least because arbitrary, imponderable “nature” wreaking havoc on humans and our fragile civilizations is such an archetypal predicament.

Today, though, there’s a big problem: we can’t tell any longer where nature leaves off and civilization begins. And that’s confusing.

Start with global warming and work your way down. Mankind is now causing what used to be called “natural disasters.” The Gulf oil spill is not a natural disaster in the traditional sense: nature didn’t cause it. But it is a natural disaster in that it’s disastrous to nature.

Or take the oft-litigated (in the courts and the media) case of Hurricane Katrina and the New Orleans levee system. I’ll repeat this here, for clarity: most of the devastating flooding of New Orleans occurred because faulty floodwalls collapsed, the result of design errors approved by the Army Corps of Engineers – i.e., the U.S. government. Natural disaster? Not really, though obviously nature had a hand in it. John Goodman’s character Creighton Bernette articulates this eloquently in the first episode of Treme.

[Video: https://www.youtube.com/watch?v=RPVMxuoarbg]

This week we’ve been treated to two unseemly corporate spectacles: the finger-pointing among BP, Transocean and Halliburton over responsibility for the Gulf oil spill, and the squirrelly changes in Facebook privacy settings and the subsequent temporizing by Facebook when people complained.

Maybe it’s ridiculous, even offensive, to compare the actions of energy industry companies – whose screwups are having catastrophic impacts on the ocean environment, the economy, and the people of the Gulf of Mexico – with Facebook’s relentless quest to open up, and squeeze more revenue from, your personal information. One is “real,” the other virtual, even trivial. But on some level, they’re exactly the same problem.

No, I don’t think Steve Jobs is evil. Nor do I think the iPad OS and app store are going to result in a walled-off Internet. But if Apple wants to leverage its brilliantly designed devices to wield more influence over web navigation and content, and make money from it, it’s going to have to loosen up and recognize some realities. Starting with the existence of satire.

This week, cartoonist and animator Mark Fiore won a Pulitzer Prize for the animated political cartoons he does for the San Francisco Chronicle’s website. Nieman Lab’s Laura McGann reports that Apple rejected Fiore’s proposed iPhone app last December – not the first time it’s kicked a political cartoonist to the curb. Here are the relevant paragraphs from the letter he got:

Everybody loves – loves! – the iPad. The downside is that Apple’s new device may also be an anti-democratic force. The app-based touchscreen interface allows the creation of elegant media-consumption experiences. But it also grants the big media producers a lot of control they don’t enjoy on the open web, and limits our ability to talk back and share. At least this is what Jeff Jarvis, Dave Winer, and several other sophisticated commentators believe.

Here’s Jarvis:

It’s meant for consumption, we’re told, not creation. We also hear, as in David Pogue’s review, that this is our grandma’s computer. That cant is inherently snobbish and insulting. It assumes grandma has nothing to say. But after 15 years of the web, we know she does. I’ve long said that the remote control, cable box, and VCR gave us control of the consumption of media; the internet gave us control of its creation. Pew says that a third of us create web content. But all of us comment on content, whether through email or across a Denny’s table. At one level or another, we all spread, react, remix, or create. Just not on the iPad.

Winer:

It’s definitely not a writing tool. Out of the question. This concerns Jeff Jarvis, rightly so. This is something my mother observed when I demoed it to her on Saturday. Howard Weaver writes that not everyone is a writer. True enough, and not everyone is a voter, but we have an interest in making it easy for people to vote. And not everyone does jury duty, but easy or not, we require it. Writing is important, you never know where creative lightning will strike. And pragmatically, experience has shown that the winning computer platforms are the ones you can develop for on the computer itself, and the ones that require other, more expensive hardware and software, don’t become platforms. There are exceptions but it’s remarkable how often it works this way.

I don’t have an iPad – at least, not yet – but I identify with these concerns.

New York City Police Commissioner Raymond Kelly is talking about jamming cellphones when a terror attack occurs. During last month’s attacks on Mumbai, perpetrators took direction via cellphone from “handlers” who were apparently following the media coverage. Thus they could relay both general information about the authorities’ response as it unfolded and specific information coming from outside the besieged hotels. Conceivably, even those multiple Twitter feeds coming from the chaotic scene could have been part of the terrorists’ information universe – though it’s not clear if they were, or how useful that gusher of information would be.

So jamming cellphones could have some utility – like cutting off the power during a hostage situation, it would cut off access to the outside world. (Of course, cellphones might stop working on their own because the system gets overloaded, as it did during 9/11 and Katrina.) But this would raise a host of technical and legal issues. Could the authorities isolate and individually jam the phones in question? (Unclear – and if they could do that quickly, it would probably be more useful to listen in.) And how narrowly could the jamming area be targeted? Even if it could be limited to a single building, jamming would also cut off the phones of hostages (or would-be John McClanes) whose communications with the outside world might be useful or important, as well as the phones of journalists and onlookers outside who are documenting the event.

The authorities might view the intense focus by gadget-wielding observers as part of the problem, conveying too much information to the world at large that could filter back to the terrorists. But an information quarantine itself could be dangerous and counterproductive, arrogating a lot of power to the state – like the universal eavesdropping technology Batman reluctantly employs in The Dark Knight.

In truth, I doubt that onlookers with cellphones pose a major problem for antiterrorism strategists, but this does show some of the difficulties of confronting an unfolding attack amid a cloud of digital information. Gadgets and the Internet give everyone eyes on everyone else. It sounds like 24 or a movie thriller, but it’s potentially very messy indeed in real life.

Happy New Year. I decided to take a break from blogging over the holidays, figuring a reboot would be beneficial. Frankly, my blogging had suffered because of Twitter. For me it began last year as a kind of adjunct to blogging – a place to throw out observations, stray fragments of ideas, etc. But then it morphed into something a little bigger than that – though what, exactly, I can’t quite define. Sometimes it was pure procrastination, frivolity, fun. Other times, Twitter became a kind of field for cultivating bloggable ideas, or ideas for journalism, or about the nature of journalism itself. It became not a distraction from other things but an end in itself. But was this ultimately *useful* – a waste of effort or an investment in something? And if it was the latter, what was I investing in?

This is about the nature of reading and writing today. Of course these activities are ever more interactive, more immediate, and, er, shorter in duration than ever before. They are ever more likely to involve direct give-and-take with others in real time. The “investment” is not just in self-expression, in reporting facts or disseminating ideas effectively, but it’s also in the iterative process itself – how the social network feedback shapes and reshapes your thoughts.

It’s endlessly alluring, the long braid of thoughts and observations, and the capability to weave yourself into it. It’s funny, dramatic, provocative, weird. And the Twitterverse is vast and spans more geographic and intellectual area every day. But because it’s a conversation among millions, it is also not terribly deep. It is enriching in some ways, not in others. Social networks are self-selecting, so the conversations trade on shared attitudes and assumptions that may go unchallenged. There isn’t much opportunity for rumination; it’s hard, for example, to craft an argument in 140 characters (though it’s a worthy challenge to try). The key is knowing when to dive in and when to step back and reap some of the dividends of Twitter by plowing them back into other forms – and vice versa.
