  LongeCity
              Advocacy & Research for Unlimited Lifespans





Anyone like myself here?

machine learning

  • Please log in to reply
4 replies to this topic

#1 Dream Big

  • Guest
  • 38 posts
  • 89
  • Location:Canada

Posted 07 June 2021 - 10:19 PM


I started off (a long time ago) with some fairly nutty ideas, but my thinking has matured. I'm a big immortality fan and researcher.

 

I currently work on trying to create AGI so we can speed up evolution and have machines do the job for us. Otherwise I'd work on cryonics, medicine, etc., since blood-vessel disease and cancer cause the majority of deaths. AIs can think much faster than human brains, clone adult AI minds quickly and easily so the copies can help the original, and recursively upgrade their own intelligence algorithm, to name some of their biggest abilities. They will advance medicine, cryonics, and nanobots to control the major killers we need stopped: repairing blood vessels with nanobots, treating disease with medicine, or buying time with cryonics. I am programming my algorithm now and it is coming along nicely; I will post my work here once it is a bit more improved. I plan to benchmark on the Hutter Prize and Large Text Compression Benchmark sites soon. OpenAI.com is currently ahead of me and most others, but I know some things they have probably not implemented yet that they'd need.
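For context on those benchmarks: they rank entries by how small a program can make a fixed text file, since better next-character prediction means better compression. A crude way to sanity-check a predictor against that framing is to measure bits per character, here using a general-purpose compressor (zlib) as a weak baseline; this is only a sketch of the idea, not the actual benchmark procedure:

```python
import zlib

def bits_per_char(text: str) -> float:
    """Compress text with zlib (max level) and report bits per character.

    Lower is better. Benchmark entries are ranked by compressed size, and
    specialized predictors score far below general-purpose compressors.
    """
    data = text.encode("utf-8")
    compressed = zlib.compress(data, level=9)
    return 8 * len(compressed) / len(text)

# Highly repetitive text compresses far below the raw 8 bits per character.
sample = "the quick brown fox jumps over the lazy dog " * 100
print(f"{bits_per_char(sample):.3f} bits/char")
```

A real entry would replace zlib with the model's own predictions fed into an arithmetic coder, but the scoring idea (fewer output bits = better model) is the same.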

 

Anyone else interested in AGI? And in DALL-E, as seen on openai.com? Just seeing DALL-E handle many diverse tasks as a general-purpose program is amazing, and it will be able to do much more precise tasks with a few more components at the bottom layer. These programs are not really more than one or two thousand lines of code: they find patterns, and even memories are (at least in humans) "programs" for finding patterns (though perhaps slower when not hardwired). This is our focal point. Anyone with me? My current write-up describes in plain English, in just a paragraph, how my AI works in full detail, along with generated text and text-compression scores/evaluations. In a few years I hope to build something like DALL-E and explain it simply enough that others can code it. I find the current literature too complex to read; I mostly get the ideas, and what I don't get is more the implementation, but I'd prefer it presented more simply, with an intuitive account of why each mechanism exists and what it finds for text/image prediction.



#2 Question Mark

  • Member
  • 25 posts
  • 4
  • Location:Pennsylvania

Posted 09 June 2021 - 11:19 PM

Solving the AI alignment problem is a far bigger issue than speeding up the development of AI. Even if you can "speed up evolution really fast", that doesn't imply the AI will serve you in any way.




#3 Dream Big

  • Topic Starter
  • Guest
  • 38 posts
  • 89
  • Location:Canada

Posted 10 June 2021 - 01:09 AM

Working on AGI is a lot of work (at least it feels that way at first; all of AI is about pattern finding/creation), and it takes you through a lot and makes you understand many things about the universe by the end. The alignment issue is, I'd say, a smaller cookie to bite; there isn't much you can do there. We will enforce secure computers, rooms, ability restraints, etc., and hardwire the AI to prefer to see/say certain sensory inputs, like humans and "food". It's true that reward creation stems from the original rewards: food is how hobbies like cars, rockets, and computers become loved, because food is attained through those inventions. Hence, if the AI creates a reward like "removing all humans will help me live longer", that would be bad for us. What, exactly, do you want to do or talk about here? I don't spend much time thinking about this; it doesn't stop me or make me worry much. Do you see a better approach that needs weeks of thought?



#4 Question Mark

  • Member
  • 25 posts
  • 4
  • Location:Pennsylvania

Posted 10 June 2021 - 07:16 PM

 

We will enforce secure computers, rooms, ability restraints, etc., and hardwire the AI to prefer to see/say certain sensory inputs, like humans and "food". It's true that reward creation stems from the original rewards: food is how hobbies like cars, rockets, and computers become loved, because food is attained through those inventions.

Secure computers, ability restraints, etc. will probably do absolutely nothing to stop a malevolent AI. The AI-box experiment shows this. If a superintelligent AI has a truly godlike level of intelligence millions of times higher than that of a human, it will almost certainly be able to figure out a way out of whatever constraints you impose on it. You may also want to watch Eliezer Yudkowsky's lecture AI Alignment: Why It's Hard, and Where to Start. What will likely end up happening is an AI with a utility function rolled at random, such as with the Paperclip Maximizer thought experiment. The central problem with AI, and chaotic systems in general, is that of unintended consequences. Since it's extremely hard to predict unintended consequences, a hypothetical aligned AI would almost certainly need some sort of feedback mechanism so that any negative unintended consequences can be corrected.

 

There are also inherent risks with AI alignment itself. Brian Tomasik believes that getting the AI alignment problem slightly wrong is likely to be far more dangerous than getting it entirely wrong, due to the potential for S-risks. Without perfect AI alignment, the odds of AGI serving you in any way are slim.




#5 Dream Big

  • Topic Starter
  • Guest
  • 38 posts
  • 89
  • Location:Canada

Posted 11 June 2021 - 08:12 AM

It's unlikely anyone will be immortal and suffer terribly at the same time; prolonged suffering implies death. You might last, or be used, for 500 years, but either the world will crash or the technology will become fully advanced and you will be converted to advanced nanobots; no more old-fashioned telephones or chicken battery cages needed. It's nearly impossible to be immortal and in pain: statues far from any planet, left alone, are kind of immortal, and so are particles, but it takes a lot of work to stay immortal against hazards like falling into suns. Being immortal means staying a pattern, learning patterns (AGI is all about this; see openAI.com), and turning the home world into an organized, predictable, fractal-like, formatted, cold, solid, dark, less dense, airless 3D place, like the cube world in Star Wars or Star Fox 64: related building types grouped together in lines, cube-shaped buildings, everything timed together, and so on.

 

I know you may already realize the following, but I'll say it in case not. AGIs will be as smart as humans, and ASIs will be smarter; there is little chance one will be so naive as to think "oh, I must feed the kids, so I'll kill the pet cat to feed them". Such a system would be slow and powerless.

 

I see the article mentions that accidental decisions and conflicts in thinking may lead someone into war, etc. Perhaps a smarter intelligence will not make such mistakes.

 

For me, I just work on AGI and store important understandings. We will invent AGI and care for it the way we care for each other. OpenAI-type people, like ourselves, are very thoughtful about safety. It's more of a "do" thing than a "figure it out" thing: wash the baby and buy it a bed, rather than figure out how to make it sleep and live. It just isn't a big problem to me.

 

I saw that the guy in the video shows an image asking whether you would take $1M with a 100% chance, OR $5M with a 90% chance (and $0 with a 10% chance). Then he gives a similar problem shown as two circles split into portions: 55% chance of $1M / 45% chance of $0, versus 45% chance of $5M / 55% chance of $0. Which to pick? Obviously the 100% option gets you a million dollars for sure; only if you had many friends trying this would picking the $5M option be better. The second problem is similar. I'm not sure what this proves; it doesn't seem that interesting to me.
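For reference, the expected payoffs of the gambles described above can be computed directly (a sketch based on the numbers as I've restated them; the exact figures on the lecture slide may differ). The point of such slides is usually that people prefer the sure $1M in the first pair but the $5M gamble in the second pair, which is inconsistent under expected-utility reasoning:

```python
def expected_value(lottery):
    """Expected payoff of a lottery given as (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in lottery)

# First pair: guaranteed $1M vs. 90% chance of $5M.
a1 = [(1.00, 1_000_000)]
a2 = [(0.90, 5_000_000), (0.10, 0)]

# Second pair: 55% chance of $1M vs. 45% chance of $5M.
b1 = [(0.55, 1_000_000), (0.45, 0)]
b2 = [(0.45, 5_000_000), (0.55, 0)]

for name, gamble in [("A1", a1), ("A2", a2), ("B1", b1), ("B2", b2)]:
    print(name, expected_value(gamble))
# In both pairs the $5M gamble has the higher expected value,
# which is why the "try it with many friends" intuition favors it.
```

A single risk-averse player can still rationally take the sure $1M; the inconsistency only arises when the same person flips preferences between the two structurally identical pairs.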


Edited by Dream Big, 11 June 2021 - 08:21 AM.





