Written by Matt Windsor
This spring, Chris Callison-Burch, Ph.D., was in town to share an unusual approach to machine learning. This is one of the hottest topics in computer science: it is behind everything from Google's self-driving cars to Apple's Siri personal assistant.
Callison-Burch, an assistant professor at the University of Pennsylvania, is building a system that can automatically translate foreign languages into English, especially obscure dialects (from an American point of view) that can be of great interest to national security. He was in Birmingham at the invitation of Steven Bethard, Ph.D., a machine learning researcher and assistant professor in the UAB College of Arts and Sciences Department of Computer and Information Sciences.
In order to teach a computer to do something, Callison-Burch explained, you need to give it examples. Lots of examples. For a French-English translation, there are millions of sample texts available on the Internet. For Urdu, not so much.
One way around this problem would be to pay professional translation services thousands of dollars to create the "corpus" of words you would need to train a computer to translate Urdu automatically. Callison-Burch has pioneered another approach: He paid some random folks on the Internet a few bucks at a time to do the work instead.
Callison-Burch is one of a growing number of researchers using Amazon Mechanical Turk, a service of the giant Internet company that bills itself as a "marketplace for work." Mechanical Turk, or MTurk, as it is known, "has almost become synonymous with crowdsourcing," Callison-Burch said. Anyone in need of help with a "human intelligence task" (Amazon's term) can post a job description and the "reward" they are willing to pay. One recent afternoon, some of the 255,902 tasks available on MTurk included tagging photos on Instagram (4 cents per picture), typing out the text visible in distorted images (1 cent per image) and rating test questions for a biology exam for a researcher at Michigan State University (a penny per question; this is a popular price point).
Callison-Burch started out by giving Turkers and professional translators the same tasks. He encountered some trouble at first: respondents copying and pasting their assigned sentences into Google Translate, for example. "Quality control is a major challenge," Callison-Burch said. "It is important to design tasks to be simple and easy to understand."
|In order to teach a computer to do something, you need to give it examples. Lots of examples.|
That's where Mechanical Turk can shine.
So he tweaked his assignments to filter out people who weren't really native speakers, and added some clever quality-control mechanisms, such as having additional Turkers pick the best translations out of multiple versions of the same sentence. Callison-Burch was able to get remarkably close to professional quality, for "approximately an order of magnitude cheaper than the cost of professional translation," he said.
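The voting step described above can be sketched in a few lines of Python. The data and function names here are hypothetical, a minimal illustration of plurality voting over redundant translations rather than Callison-Burch's actual pipeline:

```python
from collections import Counter

def pick_best_translation(votes):
    """Return the candidate translation with the most worker votes (plurality)."""
    winner, _ = Counter(votes).most_common(1)[0]
    return winner

# Hypothetical data: three Turkers each translated the same sentence;
# five more Turkers then voted for the version they judged best.
candidates = {
    "A": "The weather is pleasant today.",
    "B": "Today weather is nice.",
    "C": "The weather today is pleasant.",
}
votes = ["A", "C", "A", "A", "B"]
best = candidates[pick_best_translation(votes)]  # candidate "A" wins 3-1-1
```

In practice a requester would also break ties and discard votes from workers who fail quality checks, but the core idea is just this redundancy-plus-voting pattern.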
Turk-powered translation could be particularly helpful in translating regional Arabic dialects, Callison-Burch noted. "Because standard machine translation systems are trained on written text, they don't handle spoken language well," he said. In a recent study, Callison-Burch and his collaborators found that "comments on Arabic newspaper websites were written in dialect forms about 50 percent of the time." A machine learning system trained in these dialects could offer vital clues about where a writer is from in the Middle East, for example, or about "his or her informal relationship with an interlocutor based on word choice."
Applications from obesity to philosophy
MTurk's brand of "artificial artificial intelligence" (Amazon's Turk tagline) could also be applied to other machine learning research at UAB, notes Steven Bethard. "Chris' work is fascinating," with applications from medicine to the social sciences, Bethard said.
UAB researchers are already putting MTurk to use. Andrew Brown, Ph.D., a research scientist in the Office of Energetics in the School of Public Health, has tested Turkers' ability to categorize biomedical research studies. "We like to do some creative looks at what's been published and how," Brown said. For a recent paper, Brown and colleagues were interested in systematically evaluating nutrition-obesity studies. They wanted to find out whether studies with results that coincide with popular opinion are more likely to draw attention in the scientific community than studies that contradict the conventional wisdom. (They used citations as a proxy for the scientific community's opinion of a paper.)
The first step was to identify all the studies of interest. But "the problem is, there are 25 million papers in PubMed, and sometimes the keywords don't work very well," Brown said. "It helps to have a human set of eyes take a look at it." Instead of giving Ph.D.-level scientists the job, the researchers turned to MTurk. The Turkers successfully evaluated abstracts to identify appropriate studies and categorize the studied foods, then gathered citation counts for the studies in Google Scholar. (There was no significant link between public and scientific opinion when it came to the papers.)
"We found it to be useful," Brown said. "Expecting a perfect rating or an exhaustive rating from microworkers is probably a little premature, but on the other hand even trained scientists make mistakes." Brown plans to use crowdsourcing for future studies. "This is just one more tool to add to our research toolbox," he said.
Josh May, Ph.D., an assistant professor in the UAB College of Arts and Sciences Department of Philosophy, has been using MTurk for several years, asking Turkers to solve thorny moral dilemmas. "I present participants with hypothetical scenarios and ask them to provide their opinion about them: 'Did the person act wrongly?'" May said. "Then I see whether responses change when the scenarios are slightly different, e.g., when a harm is brought about actively versus passively, or as a means to a goal versus a side effect. Statistical analysis can reveal whether the differences are significant, providing evidence about whether the slight changes to the scenarios make a real difference in everyday moral reasoning."
|"Expecting a perfect rating or an exhaustive rating from microworkers is probably a little premature, but on the other hand even trained scientists make mistakes.... This is just one more tool to add to our research toolbox." —Andrew Brown, Ph.D.|
Social justice and microwork
May, Brown and Callison-Burch share an interest in social justice for Turkers as well. "The main ethical issue with MTurk is exploitation," May said. "The going rate is often around a quarter for a few minutes of work, which typically adds up to less than the federal minimum wage, even when working quickly. This apparently isn't illegal given certain loopholes, but that doesn't make it moral. Just because someone will work for pennies doesn't mean we should withhold a living wage."
May's solution for his own research "is to estimate the time it will take most workers to complete the task and then pay them enough so that the rate would amount to at least minimum wage." Brown takes a similar approach; when the Turkers work more slowly than expected, which drives down their overall wage, "there are bonus systems in place where you can give them something extra," he said.
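The pay-rate arithmetic May and Brown describe is simple enough to sketch. The function names and rounding choices below are assumptions of this illustration, not code from either researcher; the $7.25 figure is the U.S. federal hourly minimum wage:

```python
import math

FEDERAL_MIN_WAGE = 7.25  # USD per hour (U.S. federal minimum)

def fair_reward(estimated_minutes, hourly_wage=FEDERAL_MIN_WAGE):
    """Smallest reward (in dollars) that pays at least hourly_wage
    for the estimated completion time, rounded up to the next cent."""
    cents = math.ceil(hourly_wage * estimated_minutes / 60 * 100)
    return cents / 100

def bonus_owed(reward_paid, actual_minutes, hourly_wage=FEDERAL_MIN_WAGE):
    """Top-up bonus when a task took longer than the estimate."""
    return max(0.0, round(fair_reward(actual_minutes, hourly_wage) - reward_paid, 2))

# A task estimated at 3 minutes should pay at least 37 cents; a worker
# who actually needed 6 minutes is then owed a 36-cent bonus.
reward = fair_reward(3)
bonus = bonus_owed(reward, 6)
```

Rounding up to the next cent errs in the worker's favor, which matches the spirit of the bonus systems Brown mentions.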
Callison-Burch is using his programming skills to help Turkers earn fair wages. He has created a free browser extension (available at crowd-workers.com) that identifies high-paying jobs and flags job posters with a large number of complaints.
Crowdsourcing operations such as MTurk represent an untapped resource for scientists of all stripes, Callison-Burch concluded. "Individual researchers now have access to their own data production companies," he said. "Now we can get the data we need to solve problems."