In 2016, MinnPost, an independent news outlet in the Twin Cities in Minnesota, ran an excellent feature on the Gopher protocol, an early competitor to the World Wide Web.
If you’ve never heard of Gopher, there’s a good reason for that.
In the early 90s, however, Gopher was emerging as the flagship internet technology that would connect all the files on all the connected computers around the world. It was the creation of a small, plucky team of researchers in jeans and T-shirts at the University of Minnesota (whose mascot is the Golden Gopher). Industry commentators generally agreed that the World Wide Web, by contrast, felt a little too esoteric.
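Part of what made Gopher so approachable was the simplicity of the protocol itself: per RFC 1436, a client sends a selector string and the server answers with a menu of tab-separated lines, each one naming an item's type, display text, selector, host and port. A minimal sketch of parsing one such menu line (the field names here are my own, not from the spec):

```python
# A sketch of parsing a Gopher menu line per RFC 1436.
# Each line: a one-character type code glued to the display string,
# then selector, host and port, all separated by tabs.
from typing import NamedTuple


class GopherItem(NamedTuple):
    item_type: str   # '0' = text file, '1' = directory, etc.
    display: str     # human-readable label shown in the menu
    selector: str    # string the client sends back to fetch this item
    host: str
    port: int


def parse_menu_line(line: str) -> GopherItem:
    """Split one CRLF-terminated Gopher menu line into its fields."""
    type_and_display, selector, host, port = line.rstrip("\r\n").split("\t")
    return GopherItem(type_and_display[0], type_and_display[1:],
                      selector, host, int(port))


item = parse_menu_line("1About Gopher\t/about\tgopher.micro.umn.edu\t70\r\n")
```

The whole protocol fits in a few pages of RFC text, which is a big part of why clients for it appeared so quickly.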
In 1991, the Gopher developers released version 1.0. “It was the first viral software,” developer Bob Alberti told the MinnPost. “All these people started calling [the university] and pestering the president and other administrators, saying, ‘This Gopher thing is great, when are you going to release a new version?’”
This was one of the first big indications that the internet was going to become a thing. The world was ready to start communicating at a global level. The hardware was there, and the software was taking its first steps.
So the following year, the university, realizing it owned a goose capable of laying golden eggs, announced it would charge licensing fees to Gopher users — “hundreds or thousands of dollars, depending on the size and nature of their business,” MinnPost reporter Tim Gihring wrote.
Meanwhile, Sir Tim Berners-Lee, inventor of the World Wide Web, was adamant that his employer, CERN, “relinquish all property rights to this code” — that was the wording from the April 30, 1993, announcement out of Switzerland.
The Web was going open-source; Gopher was going walled garden.
And the implications became clear quickly. As Gihring reported, Gopher’s traffic grew nearly 1,000 percent in 1993.
The World Wide Web grew by 341,634 percent during that same time.
A quarter of a century later, many organizations have failed to understand the lesson that the University of Minnesota learned in 1993: When you build technology that can bring billions of people closer together, you can either make a little bit of money in the short term by selling licenses, or you can build a whole new world on top of your technology.
What This Has to Do With the Translation Industry
The world today has a demand for interconnectedness similar to the one it had back in the early 90s, but the obstacles are different.
Back then, we had the hardware — servers, fiber optic cables, personal computers — to build out the internet, but we lacked the software to make it usable for most people.
Today, we have the hardware and the software. What’s holding people back today is language itself. As we’ve noted before, Google Translate processes 100 billion words every single day. Those are 100 billion points of friction in global conversations.
To us, this sounds like an innovation problem. The software and the business models our industry relies upon too quickly fall into the penny-wise, pound-foolish traps the University of Minnesota fell into. Expensive licenses for MT software, cumbersome workflows and a mountain of content (that’s growing exponentially) are all creating bottlenecks that stop us from addressing the world’s underlying need to communicate.
The translation industry needs to embrace open-source business models and the innovations they foster.
What an Industry Built On Open-Source Innovation Would Look Like
Drupal engineer James Wilson at Bluespark demonstrates why an open-source model can be such a competitive advantage: There are nearly as many people building Drupal’s software as there are employees at all of Microsoft.
From a purely practical perspective, that’s exciting for any industry. When you have a community of curious, engaged developers who are continuously testing hypotheses, fixing bugs and finding new applications for technology, you have a burning cauldron of collaborative innovation.
In fact, author and innovation researcher Greg Satell argues that most industries going forward will be shaped by the collaborative innovations their communities produce. “Today, the ability to collaborate is becoming a key competitive advantage and open source communities are a prime example of that,” Satell writes. “… What’s becoming clear is that every industry will eventually have to learn the same trick the tech industry has. The future, in large part, will be made of proprietary business built on top of communal technologies.”
Building proprietary apps and businesses on top of communal technologies could solve each of the problems — unmet demand, inefficient workflows, content overload — that hold our industry back. Let’s explore how by going problem-by-problem.
Solving the World’s Demand for Translation
At the time of writing, there are about 3.7 billion people online. The 10 largest languages cover 2.9 billion of those users, but the remaining 800 million-plus people speak smaller languages, and for many of them the internet remains largely unintelligible.
Carl Yao, founder of chat-based mobile translation app Stepes, argues that this base of users is fueling the rise of what he calls the “era of big translation,” analogous to big data, because their demands are going to force developers, companies and whole governments to make more of the internet readable to them.
“Unfortunately, traditional translation paradigms would quickly be overloaded if tasked with handling cross-cultural communication at this scale,” Yao says. Tom Armstrong, writing at the Stepes blog, notes that for now demand so greatly overwhelms supply that translation services must be priced relatively high to keep the market in equilibrium.
Therefore, the supply of translation power must increase dramatically to make the cost of translating, say, the Portuguese internet for Tagalog speakers, almost negligible.
This starts by making the translation technologies themselves more abundant and cheaper. Again, we’ve seen this dynamic already in the internet’s growth. Take mobile phones. The walled-garden iOS and the open-source Android operating systems are installed on more than 99 percent of all phones.
The open-source OS has the biggest user base, too: 80 percent of smartphones sold today run Android, James Vincent at The Verge reports.
And, sure, maybe iOS has an aesthetic edge over Android (you’re free to debate this on your own), but Android phones are undeniably cheaper because they’re free of all the restraints imposed by licensing. This gets more phones in more hands, and this is the key to getting the next billion internet users online.
Modernizing Workflows So Translators Can Focus On Their Work
Disjointed workflows cause headaches and create inefficiencies in all industries. But in an industry where the entire production line exists in a digital space, everyone from the client to the translation project manager to the translator to the editor should be working in the same collaborative platform.
Translators shouldn’t have to ping PMs with “Did you get my last email?”-type emails ever again.
Instead, we envision a platform where translators can see and accept available projects, apply MT tools directly, and collaborate inside the document with editors and other translators. Likewise, an organization’s managers would work within that platform, and even clients would receive translations in the platform.
Essentially, we’re imagining a collaborative platform that would be to existing translation management tools what Google Docs was to Office.
And once again, we envision collaboration as being the key element: At the workflow level, where friction between translator and editor disappears, and at the industry level, where developers iterate and build on top of communal software.
The end goal would be interoperability, or ensuring all systems in the translation industry can talk to one another easily. At the moment, however, interoperability is precisely what’s lacking in translation, TAUS founder Jaap van der Meer argues.
As a result, most LSPs have to hire and pay people just to bridge those technology gaps, to smooth over processes that are disrupted simply because the pieces of software don’t work well with one another.
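In practice, interoperability usually comes down to shared interchange formats. XLIFF, the OASIS standard that most translation tools already support, is the obvious candidate; the sketch below (a simplified, namespace-free XLIFF 1.2 document, not taken from any particular tool) shows how little code it takes for any system to read another's translation units once a common format is agreed on:

```python
# A simplified sketch of reading translation units from an XLIFF 1.2
# document. Real XLIFF files declare an XML namespace; it is omitted
# here for brevity.
import xml.etree.ElementTree as ET

XLIFF_SAMPLE = """<?xml version="1.0"?>
<xliff version="1.2">
  <file source-language="en" target-language="pt"
        datatype="plaintext" original="site.txt">
    <body>
      <trans-unit id="1">
        <source>Hello, world</source>
        <target>Olá, mundo</target>
      </trans-unit>
    </body>
  </file>
</xliff>"""


def extract_pairs(xliff_text: str):
    """Return (id, source, target) tuples from a namespace-free XLIFF doc."""
    root = ET.fromstring(xliff_text)
    return [(tu.get("id"), tu.findtext("source"), tu.findtext("target"))
            for tu in root.iter("trans-unit")]


pairs = extract_pairs(XLIFF_SAMPLE)
```

When every tool in the chain can emit and ingest the same format, the glue work van der Meer describes largely disappears.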
Helping the World Make Sense of the Content and Data it Produces
Yao’s era of big translation is going to eventually run into the same problem the era of big data has: What does the world do with all of this information?
Andrew Joscelyne from LT Innovate has a piece at GALA that underscores this point perfectly. With 4 billion people online, we create a lot of knowledge every single day. Turning that into useful intelligence — i.e. legible language that human minds can process, store and act upon — is a monumental task.
“The breakneck growth in big data propagation due to the combination of sensors, cloud storage, and connectedness is forcing businesses and other organizations to develop solutions that can produce useful knowledge automatically from the data tsunami,” Joscelyne writes. The same will soon be true of a linguistic tsunami.
That means we as a world need scalable tools that can distill all of our chatter into relevant, meaningful ideas.
“How should the industry react to cognitive technology and the shift to knowledge as a platform?” he asks. “Resistance would be a natural corporatist reflex.
“However, we suggest that the translation industry should invent new types of smart cooperation between translators and the machineries of knowledge management. As in many jobs in content-rich industries likely to be impacted by machine learning, the task ahead will be to collaborate and innovate, both sustainably and sometimes disruptively on top of the emerging infrastructure.”
Images by: Ilya Pavlov, dotshock/©123RF Stock Photo, stokkete/©123RF Stock Photo, alphaspirit/©123RF Stock Photo