MTurk is 20 years old today – what did you create with it?
frmersdog 1 hour ago [-]
If there's any justice, a good number of comments will focus on the ethical nightmare MTurk turned out to be. Apologies to the people who worked on it, but it's fair and appropriate for observers to point out when someone has spent their time and energy creating something that is a net negative for the state of society. That's probably the case here.
larodi 10 minutes ago [-]
Happily, I can state that we created nothing based on MTurk, since it had this negative ethical side from day one.
crossbody 30 minutes ago [-]
"Probably"? Care to provide reasoning, or is this just a knee-jerk reaction? Are you familiar with the service and how it works?
maxrmk 1 hour ago [-]
What do you see as net negative about it? I’m familiar with the product but not that aware of how it’s been used.
edoceo 1 hour ago [-]
These are extraordinary claims (yea?). I'm sure there are great stories of opportunity creation and destruction - how could we even measure the net effect?
comrade1234 3 hours ago [-]
My wife had dozens - well, probably over 100 - handwritten recipes from a dead relative. They were pretty difficult to read. I scanned them and used MTurk to have them transcribed.

Most of the work was done by one person - I think she was a woman in the Midwest; it's been about 15 years, so the details are hazy. A few recipes were transcribed by people overseas, but they didn't stick with it. I had to reject only one transcription.

I used MTurk in some work projects too, but those were boring and maybe also a little unethical (basically paying people $0.50 to give us all of their Facebook graph data, for example).

cactusplant7374 33 minutes ago [-]
Do you think ChatGPT could do the same work now? It would be interesting to try it.
pvankessel 25 minutes ago [-]
I used MTurk heavily in its heyday for data annotation - it was an invaluable tool for collecting training data for large-scale research projects, and I honestly have to credit it with enabling most of my early career triumphs. We labeled and classified hundreds of thousands of tweets, Facebook posts, news articles, YouTube videos - you name it.

Sure, there were bad actors who gave us fake data, but with the right qualifications and timing checks, and if you assigned multiple Turkers (3-5) to each task, you could get very reliable results, with inter-rater reliability that matched that of experts. Wisdom of the crowd, or the law of averages, I suppose. Paying a living wage also helped - the community always got extremely excited when our HITs dropped and was very engaged. I loved getting thank-yous and insightful clarifying questions in our inbox.

For most of this kind of work I now use AI and get comparable results, but back in the day, MTurk was pure magic if you knew how to use it to its full potential. Truthfully, I really miss it - hitting a button to launch 50k HITs and seeing the results slowly pour in overnight (and frantically spot-checking to make sure you weren't setting $20k on fire) was about as much of a rush as you can get in the social science research world.
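A minimal sketch of the multi-annotator voting scheme described above - assign several Turkers per item and take the majority label; the function and data names here are hypothetical:

```python
from collections import Counter

def aggregate_labels(annotations):
    """Collapse several workers' labels per item into a consensus label.

    annotations: dict mapping item_id -> list of labels from different workers.
    Returns: dict mapping item_id -> (majority_label, agreement_ratio).
    """
    consensus = {}
    for item_id, labels in annotations.items():
        # most_common(1) yields the single most frequent (label, count) pair
        (label, count), = Counter(labels).most_common(1)
        consensus[item_id] = (label, count / len(labels))
    return consensus

# Hypothetical labels from three Turkers per tweet
votes = {
    "tweet-1": ["positive", "positive", "negative"],
    "tweet-2": ["neutral", "neutral", "neutral"],
}
consensus = aggregate_labels(votes)
```

The agreement ratio doubles as a cheap per-item reliability signal: items with low agreement can be re-run with more workers or routed to an expert.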
mtmail 3 hours ago [-]
We asked users to evaluate 300x300-pixel maps. Users were shown two images and had to decide which better matched the title we chose. Answers were something like "left", "right", "both same", and "I don't know". Due to a misconfiguration, the images didn't load for users (they only loaded on our internal network). Still, we got plenty of "left" and "right" answers. Random and unusable. Our own fault, of course.
danpalmer 35 minutes ago [-]
I have looked at MTurk many times throughout my career. In particular, my previous company did a lot of data cleaning, scraping, product tagging, and image description, with machine learning built on top of these. This was all pre-LLM. MTurk always felt like it would be a great solution.

But every time I looked at it, I persuaded myself out of it. The docs really downplayed the level of critical thinking we could expect; they made it clear that you couldn't trust any single result even to human-error levels - you needed to run each task 3-5 times and "vote". You couldn't really get good results for unstructured outputs; instead, it was designed around classification across a small number of options. The bidding also made pricing hard to estimate.

In the end we hired a company that sat somewhere between MTurk and fully skilled outsourcing. We trained the team in our specific needs, and they would work through data processing when available, asking clarifying questions on Slack and referencing a huge Google Doc we kept with various disambiguations and edge cases documented. They were excellent. More expensive than MTurk on the surface, but likely cheaper in the long run, because the results were essentially as correct as anyone could get them and we didn't need to check their work much.

This makes me wonder whether MTurk ever found great product-market fit. It languished in AWS's portfolio for most of its 20 years. Maybe it was just too limited?

rzzzt 55 minutes ago [-]
I'm not a participant nor creator, just remembering: "Bicycle Built for Two Thousand" recreated IBM's "Daisy Bell" by asking each person to take a short snippet and sing the part: https://youtu.be/Gz4OTFeE5JY

Delightful.

stevejb 2 hours ago [-]
Using the Prosper.com data set (a peer-to-peer lending market), I used MTurk to analyze the images of people applying for loans. This was part of a finance research project with three University of Washington finance professors.

The idea was that the Prosper data set contained all of the information that a lending officer would have, but they also had user-submitted pictures. We wanted to see if there was value in the information conveyed in the pictures. For example, if they had a puppy or a child in the picture, did this increase the probability that the loan would get funded? That sort of thing. It was a very fun project!

Paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1343275

hiddencost 1 hour ago [-]
Yikes. Have you ever considered that you were hurting people?
mtlynch 2 hours ago [-]
I'm a software developer, but I took a brief career break in 2011 to try B2B sales for an ISP. I was the only sales rep with experience as a developer, so I was always looking for ways to use my software skills to get an edge as a sales rep.

The most valuable prospects were businesses in buildings where we had a direct fiber connection. There were sites online that purported to list these buildings, plus leads that the company bought from somewhere, but the sources were all really noisy. Like 98% of the time, the phone number was disconnected or didn't match the listed address, so basically nobody used these sources.

I thought MTurk would be my secret weapon. If I could pay someone about $0.10/call to call businesses and confirm the business name and address, I'd turn these garbage data sources into lists where 100% of the prospects were valid, and none of the sales reps competing with me would have the time to burn through these low-probability phone numbers.

The first day, I was so excited to call all the numbers that the MTurk workers had confirmed, and...

The work was all fake. They hadn't called anyone at all. And they were doing the jobs at like 4 AM local time when certainly nobody was answering phones at these businesses.

I tried a few times to increase the qualifications and increase pay, but everyone who took the job just blatantly lied about making the calls and gave me useless data.

Still, I thought MTurk was a neat idea and wish I'd found a better application for it.

slyall 2 hours ago [-]
Never used it, since it was originally only available in the US. Looks like additional countries weren't added until Oct 2016, 11 years after it first launched[0]

None of the companies I've worked for have used it AFAIK, despite them all using AWS. I think I've mostly ignored it as one of the niche AWS products that isn't relevant.

[0] https://blog.mturk.com/weve-made-it-easier-for-more-requeste...

nvarsj 23 minutes ago [-]
I used it when Dropbox came out to get the max 16GB storage. Only cost me a few bucks too.
dr_dshiv 2 hours ago [-]
Hand drawn pictures of mushrooms (to later compare to the 118 carvings on Stonehenge)
edoceo 1 hour ago [-]
Say more!
malshe 3 hours ago [-]
I assisted many professors with data collection for their research in grad school. Later I also collected data for a couple of my own papers. MTurk was very popular in the beginning, in large part due to its low cost. Then one day they jacked up their commission so much that it was no longer attractive to me. The response/task quality also went down significantly. My last time using it was in 2018, for a large-scale image-labeling task. After a pilot run, I concluded I was getting garbage. I went to another vendor and never returned to MTurk after that.
coderintherye 3 hours ago [-]
I remember participating in the workforce early on transcribing really bad audio recordings along with the cheap survey type stuff. It was pretty neat back in the day.
jonatron 3 hours ago [-]
I was in school and automated some MTurk HITs to make a small amount of money.
kittikitti 2 hours ago [-]
I supported data curation on it in the beginning, but it became a popular way to exploit labor. I really love the idea, but the main value comes specifically from taking advantage of wealth inequality. I really support MTurk and the hard workers on it, but I also can't ignore the negatives.
deadbabe 1 hour ago [-]
With LLMs, I think we will finally have the missing piece needed to make something like MTurk work at scale.

Bad data and false work were big problems on MTurk, but now LLMs should be able to act as reasonable quality assurance for each and every piece of work a worker submits. Workers can be ranked and graded on the quality of their work instantly, instead of requiring human review.

You can also flip the model and have LLMs do the unit of work, and have humans as a verification layer, and the human review sanity checked again by an LLM to ensure people aren’t just slacking off and rubber stamping everything. You can easily do this by inserting blatantly bad data at some points and seeing if the workers pick up on it. Fail the people who are letting bad data pass through.
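A rough sketch of that honeypot check, assuming hypothetical reviewer data: seed items known to be bad, then fail reviewers who approve too many of them. The function name and threshold are illustrative, not any real MTurk API:

```python
def score_reviewers(reviews, gold_bad_ids, pass_threshold=0.8):
    """Grade reviewers by how often they reject deliberately seeded bad items.

    reviews: dict mapping reviewer_id -> {item_id: "approve" | "reject"}
    gold_bad_ids: set of item ids that were seeded as blatantly bad.
    Returns: dict mapping reviewer_id -> (catch_rate, passed).
    """
    scores = {}
    for reviewer, decisions in reviews.items():
        honeypots = [i for i in decisions if i in gold_bad_ids]
        if not honeypots:
            continue  # reviewer never saw a seeded item; no grade yet
        caught = sum(decisions[i] == "reject" for i in honeypots)
        rate = caught / len(honeypots)
        scores[reviewer] = (rate, rate >= pass_threshold)
    return scores

# Hypothetical review logs: one diligent reviewer, one rubber-stamper
reviews = {
    "careful": {"item-1": "reject", "item-2": "approve", "item-3": "reject"},
    "stamper": {"item-1": "approve", "item-3": "approve"},
}
scores = score_reviewers(reviews, gold_bad_ids={"item-1", "item-3"})
```

The same scoring works in either direction: grade humans reviewing LLM output, or grade an LLM verifying human work, as long as you control which seeded items each party sees.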

For a lot of people, I think this will be the future of work. People will go to school to get well-rounded educations and degrees in "Human Cognitive Tasks", making them well suited for doing all kinds of random work that fills in the gaps for AI. Perhaps they'll also minor in niche fields for specific industries. Best of all, they can work their own hours, from home.

hiddencost 1 hour ago [-]
I, uh, you do understand the causality issues here? I'm reminded of The Onion headline "Tab of LSD feeling a lot of pressure from tech worker to come up with new ideas".