Net Neutrality at risk: the Internet as we know it will change

What’s about to happen will affect us all in ways we can’t yet imagine.

The current nondiscrimination principle of “network neutrality” forbids phone and cable companies from blocking sites, discriminating among them, or entering into special business deals that benefit some sites over others.

For example, as Suzanne succinctly puts it: “Net neutrality is the idea that all information is created equal, therefore, it should be available to all users of the internet without the interference of big companies stating what can or can’t be viewed. For example, if there was not net neutrality then Google could choose to not allow any Gmail users to receive emails from Yahoo accounts and vice-versa. Also, wireless carriers could sell tiered services that would allow some people to get information faster than others.”

I found this image uncredited on another blog. If you know the author, contact me via the main Blue Mouse Monkey site.

However, net neutrality is a “dead man walking”, because the DC Circuit Court is about to rule, probably in Verizon’s favor.

As Marvin Ammori writes in Wired, “Despite eight years of public and political activism by multitudes fighting for freedom on the internet, a court decision may soon take it away.”

“The implications of such a decision would be profound. Web and mobile companies will live or die not on the merits of their technology and design, but on the deals they can strike with AT&T, Verizon, Comcast, and others. This means large phone and cable companies will be able to “shakedown” startups and established companies in every sector…”

Read the whole article on Wired »

Kaggle: what you can do with big data!

As the grim news of the NSA’s data mining sinks in, I’d like to shift gears on that topic and highlight the upside of big data.

Kaggle is a website that hosts competitions for data prediction. Data wizards compete to come up with solutions — solutions that elude experts in all kinds of industries — and so far are beating the experts hands down.

When given the chance to play with data (and write algorithms to analyze it), data scientists are able to see solutions without being distracted by industry assumptions or specialist knowledge. As Kaggle’s Jeremy Howard says, “Specialist knowledge is actually unhelpful.”

Competitions include developing an algorithm to grade student papers, developing a gesture-learning system for Microsoft Kinect, and predicting the biological properties of small molecules being screened as potential drugs. Kaggle has approximately 95,000 data scientists worldwide, from fields such as computer science, statistics, economics and mathematics. The data scientists rely on techniques of data mining and machine learning to predict future trends from current data. Companies, governments, and researchers present data sets and problems and offer prize money for the best solutions.
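As a toy illustration of the kind of predictive modeling these competitions involve (this is just a sketch with made-up numbers, not anything from an actual Kaggle entry): fit a least-squares trend line to past observations and extrapolate the next value.

```python
# Toy sketch of predictive modeling: fit a least-squares line to
# past observations and extrapolate a future value. Real Kaggle
# entries use far richer models; this only shows the basic idea
# of learning a trend from current data.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(xs, ys, future_x):
    """Extrapolate the fitted line to a future point."""
    slope, intercept = fit_line(xs, ys)
    return slope * future_x + intercept

# Hypothetical monthly figures, invented purely for illustration
months = [1, 2, 3, 4, 5]
values = [10.0, 12.1, 13.9, 16.2, 18.0]
print(predict(months, values, 6))  # extrapolated month-6 value
```

A real competition entry would swap the straight line for something like gradient-boosted trees, but the shape of the task is the same: learn from known data, predict the unknown.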

As Howard says, “Winners of Kaggle competitions tend to be curious and creative people. They come up with a dozen totally new ways to think about the problem.” (New Scientist vol. 216, No. 2893)

Way cool!



Technoschmerz

Technoschmerz, according to this Boston Globe article on neologisms, means,

“…the emotional pain (schmerz comes from a German word meaning “pain”) caused by difficult interactions with electronic gadgets or unhelpful websites. If you’ve ever felt your cellphone was out to get you, you’ve suffered from technoschmerz.”

This word is long overdue. (Thank you, Kate Greene, for the coinage.) I expect to use it often. If I take my personal experiences with digital technologies (and in running a web design company, I have plenty) and multiply them by the number of people in the world who depend on contemporary digital gadgets, I can only imagine that the confusion, delay, errors, and resulting stress from wrestling with technologies that keep changing, or don’t work intuitively or correctly, must be global and massive. Has anyone analyzed the overall cost of this? I wonder what it would amount to when weighed against the overall benefits…

Bruce Sterling's chart of technological adaptation

In his book Shaping Things (one of my all-time favorite books, btw), Bruce Sterling examines the evolving interplay between objects and people. He divides the technosocial realm into several epochs, beginning with ARTIFACTS: hand-made, muscle-powered objects, such as spears. Next come MACHINES: artifacts with moving parts that rely on a non-human, non-animal power source and require an infrastructure of engineering, distribution, and finance. Think steam engines. Next up are PRODUCTS, which are mass-produced, non-artisanal, widely distributed, and operate over continental economies of scale. Think blenders. Since 1989 we have been in the age of GIZMOS, according to Sterling. Gizmos are

“…highly unstable,  user-alterable multi-featured objects, commonly programmable, with a brief lifespan. Gizmos offer functionality so plentiful that it is cheaper to import features into the object than it is to simplify it. Gizmos are commonly linked to network service providers; they are not stand-alone objects but interfaces.

Unlike artifacts, machines and products, gizmos have enough functionality to actively nag people. Their deployment demands extensive, sustained interaction, upgrades, grooming, plug-ins, plug-outs, unsought messages, security threats,…”

Sterling goes on to argue that we are moving into the epoch of SPIMES, which are already among us in primitive forms such as the RFID tag. But that’s a topic for another post. For now, GIZMOS are enough to deal with. And according to Sterling, we have long passed the Line of No Return on them. This is the moment when a revolutionary technology becomes the status quo, and a culture has become so reliant that it cannot voluntarily return to the previous technosocial condition, at least not without social collapse.

And dependent we are. Not just on the objects, but the networks that connect them. IMAP email that shows up at home, work, on my iPad, on my iPhone. Dropbox files that do the same. Writeroom for synched notes, BaseCamp for synched project management, FreshBooks for synched book-keeping. Compared with how I managed files and communications a mere two or three years ago, a revolution has taken place in my personal life, and I know it’s been mirrored in the lives of many.

Infographic by Randy Krum

We are firmly in the age of the GIZMO. Thus I pledge allegiance to the new overlords, and I interact, upgrade, groom, and protect them from security threats whenever they demand it. Because if I fail to nurture these overlords, I become invisible and mute to anyone not standing directly in front of me!

The science and art of democratizing data

Data-visualization virtuosos Fernanda Viegas and Martin Wattenberg create a hybrid “artform” (for lack of a more inclusive term) out of data sets. Straddling the realms of science, design, art, and exploration, these graphics reveal interesting patterns in data.

“Data visualization has historically been accessible only to the elite in academia, business, and government. But in recent years web-based visualizations–ranging from political art projects to news stories–have reached audiences of millions. Unfortunately, while lay users can view many sophisticated visualizations, they have few ways to create them.

To “democratize” visualization, and experiment with new collaborative techniques, we built Many Eyes, a web site where people may upload their own data, create interactive visualizations, and carry on conversations. The goal is to foster a social style of data analysis in which visualizations serve not only as a discovery tool for individuals but also as a means to spur discussion and collaboration.”

Carbon footprint of a Big Mac, by Tim Fiddaman


Visualizing data that isn’t normally visualized, or presenting it in a new way, tells us different stories about the world. From a kid counting all the socks in his household, to trends in editing Wikipedia, to a “social network” of the characters in the Bible, Many Eyes shows us new patterns that hadn’t been noticed before.

Wattenberg and Viegas now work with Google on a project called the Big Picture Visualization Group in Cambridge, MA, with the goal of making visualizations available to regular people via Google.

In planning mode


Audit sketch of an existing website

All websites require planning—that’s so true it’s almost a tautology. But some websites require more planning than others. Blue Mouse Monkey is enjoying an influx of opportunities to overhaul large complex websites, and I’ve been in super-planning mode the last couple of weeks.

As Steve Jobs says, design is often mistakenly ascribed to how something looks, but it’s really about how it works. It’s my job as a web designer to integrate the “how it looks” and the “how it works” according to many factors. There are several useful terms to describe this type of thinking, such as information architecture, interaction design, user experience design, and website architecture.

Historically the term “information architect” is attributed to Richard Saul Wurman, who saw it as the “creating of systemic, structural, and orderly principles to make something work”.

INFORMATION ARCHITECTURE is the categorization of information into a coherent structure, preferably one that the most people can understand quickly, if not inherently.
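As a minimal sketch of that idea (the section names below are invented, not from any client site), a site’s categories can be modeled as a tree and flattened into the navigation paths a visitor would follow:

```python
# Sketch of information architecture as a tree: each key is a
# category, each nested dict holds its subcategories. Flattening
# the tree yields every navigation path a visitor could follow.
# Section names are invented for illustration.

site = {
    "About": {"Team": {}, "History": {}},
    "Services": {"Web Design": {}, "Branding": {}},
    "Contact": {},
}

def nav_paths(tree, prefix=()):
    """Yield every navigation path, e.g. 'Services > Branding'."""
    for name, children in tree.items():
        path = prefix + (name,)
        yield " > ".join(path)
        yield from nav_paths(children, path)

for p in nav_paths(site):
    print(p)
```

The point of the exercise is the structure itself: if the flattened paths read sensibly to someone who has never seen the site, the categorization is probably coherent.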

Understanding how a typical user will experience a decision a website asks them to make (e.g. click on link ‘X’ to access information ‘Y’) takes empathy. It’s the ability to put oneself in the user’s shoes — the user being someone who isn’t nearly as familiar with the website’s content or purpose as my client or I are.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

INTERACTION DESIGN attempts to improve the usability and experience of the product, by first researching and understanding certain users’ needs and then designing to meet and exceed them.

The first conversation I have with clients always begins with, “Who are your audiences, and what do you ideally want them to do on your site?”

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

USER EXPERIENCE DESIGN most frequently refers to designing a sequence of interactions between a user (an individual person) and a system, virtual or physical, primarily to meet or support user needs and goals, while also satisfying system requirements and organizational objectives.
Typical outputs include:

  • Site audit (usability study of existing assets)
  • Flows and navigation maps
  • User stories or scenarios
  • Personas (fictitious users to act out the scenarios)
  • Site maps and content inventory
  • Wireframes (screen blueprints or storyboards)
  • Prototypes (for interactive or in-the-mind simulation)
  • Written specifications (describing the behavior or design)
  • Graphic mockups (a precise visual of the expected end result)

When I plan a website I do all these things, except the Persona one, because that’s more applicable to game design. However, we bring in a focus group to give feedback on nearly-completed websites, so in a sense we have real users acting out the experience of the site.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

WEBSITE ARCHITECTURE is an approach to the design and planning of websites which, like architecture itself, involves technical, aesthetic and functional criteria. As in traditional architecture, the focus is properly on the user and on user requirements. This requires particular attention to web content, a business plan, usability, interaction design, information architecture and web design. For effective search engine optimization it is necessary to have an appreciation of how a single website relates to the World Wide Web.

Since web content planning, design and management come within the scope of design methods, the traditional Vitruvian aims of commodity, firmness and delight can guide the architecture of websites, as they do physical architecture and other design disciplines. Website architecture is also coming within the scope of aesthetics and critical theory, a trend that may accelerate with the advent of the semantic web and Web 2.0, both of which emphasize the structural aspects of information. Web 2.0 in particular, because it involves user-generated content, directs the website architect’s attention to those structural aspects.

Then there’s the issue of users with different levels of familiarity with the Web. Unlike printed forms of communication such as books, newspapers, magazines and brochures, the Web is not something the majority of the population grew up with. Kids today are “digital natives”, but there are plenty of us still around who are “digital immigrants”.

An analogy is our knowledge of The Book. We all know how to read a book, so much so we barely register it as a type of knowledge. We understand the hierarchy of cover, title, table of contents, parts, chapters, appendices, index. We don’t have to consciously remember where to begin, or in what order to experience the content, because we learned that stuff on our mother’s knee. Well, maybe not appendices and indices, but by the time we’re reading those kinds of books, we have a solid framework to slot those categories into. But the Web? We’ve had to learn that as adults. And it’s so new it’s barely been standardized. No wonder many people find websites (and computers in general) frustrating. Humankind has been tossed into a new way of organizing and accessing information, and our brains, accustomed to one method, have had to adapt to another. Not unlike the medieval monk who has to be taught how to transition from scrolls to a bound book in this comedy sketch.

Not that I’m complaining. Much like how the invention of the printing press led to the spread of liberalism, the Internet communications revolution challenges many traditional structures of knowledge and information by removing gatekeepers to access and expression.

Time for me to get back to planning more website architecture. There’s information to organize!

Bang goes the publishing industry

Noveller, the online macroblogging service and “the world’s most popular prose-sharing tool”, celebrated its millionth post last week.

“You know, before we came up with Noveller, we had all these friends creating these great 75,000- to 300,000-word works of fiction, but there was no quick, easy, fun way to share them,” cofounder Chuck Gregory said. “To be honest, we were stunned there wasn’t already anything like it out there. It seemed so obvious.”

Those who Novel on a daily basis claim to love the challenge of the utility’s 140-page minimum. “I think everyone has at least one Noveller post in them,” said MIT computer networking expert Rod Baines, who noted that he had just posted a sprawling, nuanced, multigenerational family saga while shopping that afternoon. “And half the fun is just following other people’s Novels…”

There’s more about it at this fine online news magazine…

Hypertext as Historical Hinge

So what I’m doing right now, i.e. creating a blog post, could have been fundamentally different if Ted Nelson hadn’t let his model of hypertext get dumbed down during a project he worked on in 1968?

Okay, here’s some context: Ted Nelson is an American inventor, software designer, usability consultant, systems humanist and visiting Fellow at Oxford. He is best known for coining the terms “hypertext” and “hypermedia”, and pursuing a vision of world-wide hypertext from the early 1960s. According to Ted Nelson’s Wikipedia entry, “The main thrust of his work has been to make computers easily accessible to ordinary people. His motto is: A user interface should be so simple that a beginner in an emergency can understand it within ten seconds.” (Wouldn’t that be wonderful?)

According to a page on NewMedia History by Bill Atkinson, Ted Nelson was “one of the most influential figures in computing”, “on a quest to build creative tools that would transform the way we read and write”.

Nelson was particularly concerned with the complex nature of the creative impulse, and he saw the computer as the tool that would make explicit the interdependence of ideas, drawing out connections between literature, art, music and science, since, as he put it, everything is “deeply intertwingled.”

Nelson’s critical breakthrough was to call for a system of non-sequential writing that would allow the reader to aggregate meaning in snippets, in the order of his or her choosing, rather than according to a pre-established structure fixed by the author.

So nearly 50 years ago Ted Nelson envisioned something a lot like what we know as the World Wide Web. On his own site (which is one of the uglier sites on the Web, but that’s not my point) he says,

In 1960 I had a vision of a world-wide system of electronic publishing, anarchic and populist, where anyone could publish anything and anyone could read it.  (So far, sounds like the web.)

But what we’ve ended up with is a disappointment to him:

But my approach is about literary depth– including side-by-side intercomparison, annotation, and a unique copyright proposal.  I now call this “deep electronic literature” instead of “hypertext,” since people now think hypertext means the web.

In a letter to the editor of New Scientist, 22 July 2006, Ted Nelson wrote:

I coined, you say, the word hypertext in 1963 “while working on ways to make computers more accessible at Brown University in Providence, Rhode Island” (17 June, p 60). But in 1963 I was a dolphin photographer in Miami, nowhere near Brown.

I had become inflamed with ideas and designs for non-sequential literature and media in 1960, but no one would back them, then or now. Not until the late sixties did I spend months at Brown, with no official position and at considerable personal expense, to help them build a hypertext system.

That project dumbed down hypertext to one-way, embedded, non-overlapping links. Its broken and deficient model of hypertext became by turns the structure of the NoteCards and HyperCard programs, the World Wide Web, and XML.

At the time I thought of that structure as an interim model, forgetting the old slogan “nothing endures like the temporary”. XML is only the latest, most publicised, and in my view most wrongful system that fits this description. It is opaque to the laypersons who deserve deep command of electronic literature and media. It gratuitously imposes hierarchy and sequence wherever it can, and is very poor at representing overlap, parallel cross-connection, and other vital non-hierarchical media structures that some people do not wish to recognise.

I believe humanity went down the wrong path because of that project at Brown. I greatly regret my part in it, and that I did not fight for deeper constructs. These would facilitate an entire form of literature where links do not break as versions change; where documents may be closely compared side by side and closely annotated; showing the origins of every quotation; and with a copyright system for frictionless, non-negotiated quotation of any amount at any time.

This amazes me. All along I’ve been thinking XML is marvelous. But when Ted Nelson says, “I believe humanity went down the wrong path because of that project at Brown. I greatly regret my part in it…”, I have to take notice. And he says the World Wide Web is based on a “broken and deficient model of hypertext”, and that XML is a “wrongful system.” Wow. Our lives have been momentously changed in the last 15 years by an information system of enormous scope and complexity that most ordinary folks like myself never saw coming. And Ted Nelson says we could have had something even better if he’d just stuck to his guns about how a single academic project got built back in the late sixties?
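Nelson’s complaint about hierarchy is concrete, by the way: XML requires elements to nest strictly, so two annotations that overlap, as marginal notes on a literary text routinely do, simply cannot be written as ordinary paired tags. A quick sketch with Python’s standard-library parser shows the rejection:

```python
# XML demands strict nesting, so overlapping spans are illegal.
# Two annotations that cross each other, routine in marked-up
# literature, cannot be expressed as ordinary paired tags.
import xml.etree.ElementTree as ET

nested = "<p><a>one <b>two</b></a> three</p>"       # well-formed: b sits inside a
overlapping = "<p><a>one <b>two</a> three</b></p>"  # a and b cross: not well-formed

ET.fromstring(nested)  # parses without complaint

try:
    ET.fromstring(overlapping)
    print("parsed")
except ET.ParseError as err:
    print("rejected:", err)  # the parser refuses overlapping structure
```

To represent the overlapping case in XML you have to fake it, with milestone tags, split elements, or stand-off annotation, which is exactly the kind of workaround Nelson argues a deeper model would have made unnecessary.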