
Controlled Turking: The Next Big Thing



#1 Brafarality

  • Guest
  • 684 posts
  • 42
  • Location:New Jersey

Posted 26 June 2008 - 06:21 PM


If anyone can develop a forum/social-networking/Web 2.0 site that vibrantly integrates control factors into software-code turking, it is almost guaranteed to be a hit. [Quick note: think Amazon turking sites, forums, etc. This should not be too difficult.]

Control factors would include acting as a fully automated 'middleman' between turker and client: software that scans submitted code for viruses, easter eggs, and any other unwanted malignancy that may concern the client (usually a cash-strapped startup or budget-constrained software development company).
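As a rough illustration, here is a minimal Python sketch of such a middleman check, assuming a ClamAV-style command-line scanner (clamscan) is installed; the keyword sweep at the end is only a placeholder for real static analysis, and all names are hypothetical:

    import subprocess
    from pathlib import Path

    def scan_submission(path: Path) -> list[str]:
        """Run automated checks on a turker's submission.

        Returns a list of human-readable problems; an empty list
        means the code can be forwarded to the client.
        """
        problems = []

        # Virus scan via ClamAV's command-line scanner (assumed to be
        # installed; clamscan exits with 0 when no threat is found).
        result = subprocess.run(["clamscan", "--no-summary", str(path)],
                                capture_output=True, text=True)
        if result.returncode != 0:
            problems.append("virus scanner flagged the submission")

        # Crude easter-egg/backdoor heuristic: flag suspicious calls.
        # A real service would use proper static analysis; this
        # keyword sweep is only a stand-in.
        suspicious = ("eval(", "exec(", "os.system", "socket.connect")
        for source_file in path.rglob("*.py"):
            text = source_file.read_text(errors="ignore")
            for marker in suspicious:
                if marker in text:
                    problems.append(f"{source_file}: suspicious use of {marker}")

        return problems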

Additionally, you might want to develop, or implement a pre-existing, algorithm to select interested turkers for each assignment based on factors such as reliability and past performance. But it would be wise to add some randomness to the algorithm so that a small group of turkers does not end up dominating the forum.
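For instance, a minimal sketch of one way to do this (the scoring scheme and names are hypothetical): weighted random selection with a floor weight, so strong performers are favoured without ever shutting newcomers out:

    import random

    def pick_turkers(candidates: dict[str, float], k: int = 5) -> list[str]:
        """Choose k turkers for an assignment, weighted by track record.

        `candidates` maps turker id -> reliability score in [0, 1]
        (e.g., the fraction of past submissions that passed audit).
        """
        # A small floor weight keeps selection from collapsing onto a
        # clique of top-rated turkers.
        floor = 0.1
        ids = list(candidates)
        weights = [floor + candidates[t] for t in ids]

        chosen = []
        for _ in range(min(k, len(ids))):
            pick = random.choices(ids, weights=weights, k=1)[0]
            i = ids.index(pick)
            ids.pop(i)
            weights.pop(i)
            chosen.append(pick)
        return chosen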

Controlled Turking: The Next Big Thing on *impulse

Summary: turking + control factors + clients = client trust and much wider use of turking for important coding projects, and thus much more traffic and, in 1950s talk, a hit on your hands. :)

Good luck to any takers!
Let me know how it works out.

Edited by paulthekind, 26 June 2008 - 06:23 PM.


#2 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 26 June 2008 - 09:22 PM

Interesting concept.

> As soon as Joseph completes his module, he submits it to the turking site, which automatically runs a series of Norton/McAfee-like tests on the code to ensure that it contains no viruses, easter eggs, copyright infringements, etc. This whole process takes seconds. Once successful, the code is passed along to the client, who then pays a nominal fee for use of the service and a small fee to the turker.

Automated detection of these (except perhaps the viruses) would be pretty tough, given their nature and the cleverness of a coder in hiding them. Couldn't a separate turk task be to independently audit, code-review, and critique the software that was developed?


#3 Brafarality

  • Topic Starter
  • Guest
  • 684 posts
  • 42
  • Location:New Jersey

Posted 26 June 2008 - 10:07 PM

> Interesting concept.
>
> > As soon as Joseph completes his module, he submits it to the turking site, which automatically runs a series of Norton/McAfee-like tests on the code to ensure that it contains no viruses, easter eggs, copyright infringements, etc. This whole process takes seconds. Once successful, the code is passed along to the client, who then pays a nominal fee for use of the service and a small fee to the turker.
>
> Automated detection of these (except perhaps the viruses) would be pretty tough, given their nature and the cleverness of a coder in hiding them. Couldn't a separate turk task be to independently audit, code-review, and critique the software that was developed?



Nice! That possibility passed right over me. :)
I am thinking a merger of the two might work best: a turker supplementing his audit with an anti-malignancy module or two, plus a few defenses to prevent collusion between hacking turkers where one codes and one audits.
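One cheap defense along those lines, sketched in Python (the data structures are hypothetical): pick auditors at random while excluding the coder and any coder-auditor pair that has occurred before, so a standing code-and-audit partnership is hard to arrange:

    import random

    def assign_auditor(coder, auditors, pair_history):
        """Pick an independent auditor for a submission.

        Excludes the coder themselves and anyone who has already
        audited this coder (pair_history is a set of
        (coder, auditor) tuples), so no fixed partnership can form.
        """
        eligible = [a for a in auditors
                    if a != coder and (coder, a) not in pair_history]
        if not eligible:
            raise ValueError("no independent auditor available")
        auditor = random.choice(eligible)
        pair_history.add((coder, auditor))
        return auditor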

#4 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 27 June 2008 - 12:21 AM

> Nice! That possibility passed right over me. :)
> I am thinking a merger of the two might work best: a turker supplementing his audit with an anti-malignancy module or two, plus a few defenses to prevent collusion between hacking turkers where one codes and one audits.


Not a bad idea. With a Web 2.0 system in place that includes a reputation system (like eBay seller ratings), abuse might be discouraged. Also, if turkers know their code is going to be independently audited, they may be more reluctant to introduce any deviant code.
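A reputation score of that kind can be as simple as a running average of audit outcomes. A minimal sketch (the [0, 1] scale is an assumption, not anything eBay actually publishes):

    def updated_reputation(old_score: float, n_ratings: int,
                           passed_audit: bool) -> float:
        """Fold one audit outcome into a running reputation score.

        Scores live in [0, 1]: a submission that passes independent
        audit counts as 1.0, a rejected one as 0.0.
        """
        outcome = 1.0 if passed_audit else 0.0
        return (old_score * n_ratings + outcome) / (n_ratings + 1)

Feeding a score like this into the selection weights upstream would close the loop: audits update reputations, and reputations weight future assignments.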

#5 Brafarality

  • Topic Starter
  • Guest
  • 684 posts
  • 42
  • Location:New Jersey

Posted 27 June 2008 - 04:08 AM

> Nice! That possibility passed right over me. :)
> I am thinking a merger of the two might work best: a turker supplementing his audit with an anti-malignancy module or two, plus a few defenses to prevent collusion between hacking turkers where one codes and one audits.

> Not a bad idea. With a Web 2.0 system in place that includes a reputation system (like eBay seller ratings), abuse might be discouraged. Also, if turkers know their code is going to be independently audited, they may be more reluctant to introduce any deviant code.



We are coalescing a masterpiece. :)
If I knew I had someone interested in being a part of it, and who could pressure me into doing it, I could whip this up in a month.

But without someone beating me up verbally on a daily basis and pressuring me for status updates, my procrastinating, inertia-bound self will never do it.

#6 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 27 June 2008 - 05:45 PM

> We are coalescing a masterpiece. :)
> If I knew I had someone interested in being a part of it, and who could pressure me into doing it, I could whip this up in a month.


My interests lie more in integrating human computation (in any form, not just turking) with distributed computing for the sake of advancing aging research. There's an increasing flood of informatics data pouring out of lab-automation tech, and few tools to synthesize it for broad analysis in a way that benefits aging researchers.

> But without someone beating me up verbally on a daily basis and pressuring me for status updates, my procrastinating, inertia-bound self will never do it.


Usually a paying customer or VC tends to fill this role quite well. :p

#7 Brafarality

  • Topic Starter
  • Guest
  • 684 posts
  • 42
  • Location:New Jersey

Posted 28 June 2008 - 03:01 PM

> We are coalescing a masterpiece. :)
> If I knew I had someone interested in being a part of it, and who could pressure me into doing it, I could whip this up in a month.

> My interests lie more in integrating human computation (in any form, not just turking) with distributed computing for the sake of advancing aging research. There's an increasing flood of informatics data pouring out of lab-automation tech, and few tools to synthesize it for broad analysis in a way that benefits aging researchers.


Whew! That would be database-, feed-, and distribution-intensive.

The controlled-turking site wouldn't be that easy, but there are a bunch of websites with basic frameworks, off-the-shelf components, plug-ins, etc., to help quickly develop that type of site.

The integration of turking and distributed computing, with the results made available in a lucid database format for aging researchers, would be much tougher, though by no means impossible.
A few models exist out there: universities, museums, etc., as well as the more general LexisNexis model. I think I'll check out similar sites and look over their page source for hints and tips. It might be easier than I am thinking.

Cool idea, though.


#8 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 28 June 2008 - 07:51 PM

> Whew! That would be database-, feed-, and distribution-intensive.


Indeed, but any more so than WoW or EverQuest ^ 5 or so? The goal would be to develop a similar distributed architecture and keep the messaging between server and client nodes to a minimum, i.e., let the big iron at the center, and the CPUs, GPUs, and neurons at the edge nodes, do the heavy lifting.
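For flavour, a minimal Python sketch of the message shapes such a protocol might use (entirely hypothetical, in the spirit of BOINC-style volunteer computing): the server ships a compact work unit, and the edge node regenerates its input locally and returns only a small summary:

    from dataclasses import dataclass

    @dataclass
    class WorkUnit:
        # Compact job description shipped from the central servers to
        # an edge node: an id plus the parameters the node needs to
        # regenerate its input locally, instead of downloading it.
        unit_id: int
        model: str      # e.g., which simulation to run
        seed: int       # deterministic input generation
        n_steps: int

    @dataclass
    class Result:
        # The edge node returns only a digest and a small summary,
        # never raw trajectories, keeping server<->client traffic low.
        unit_id: int
        checksum: str   # lets the server spot-check duplicated units
        summary: bytes  # compressed statistics, not raw output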

> The controlled-turking site wouldn't be that easy, but there are a bunch of websites with basic frameworks, off-the-shelf components, plug-ins, etc., to help quickly develop that type of site.


I'm not too concerned about the development or code; I'm more interested in the conceptual framework, in determining how to organize the data optimally, and in projecting which tools would add the most value for the long-range fight against aging.

The target audience for such a system is the next generation of researchers, so anticipating how scientists will be fighting aging 30 years from now is where my head is. I'm of the opinion that only marginal gains will be made by hypothesis testing on animal models or via human trials. In silico biology will accelerate aging research by many orders of magnitude, so investing in it now will pay greater dividends than any other approach, IMO. It's what those seeking to reach escape velocity are working towards, but it won't be there for anybody alive today unless the effort starts immediately.

> The integration of turking and distributed computing, with the results made available in a lucid database format for aging researchers, would be much tougher, though by no means impossible.


Heh, if it were easy I'd probably not be interested in it.

> A few models exist out there: universities, museums, etc., as well as the more general LexisNexis model. I think I'll check out similar sites and look over their page source for hints and tips. It might be easier than I am thinking.


I think of CERN when I think of such a project, except that instead of an accelerator, I envision a network of central supercomputers crunching data sent in from hundreds of millions of volunteers simply trying to save themselves. :)



