              Advocacy & Research for Unlimited Lifespans



NUWA - one really big and wicked AI

video vision complex

2 replies to this topic

#1 Dream Big

  • Guest
  • 59 posts
  • 89
  • Location:Canada

Posted 27 November 2021 - 02:41 PM


#2 Mind

  • Life Member, Moderator, Secretary
  • 17,522 posts
  • 2,000
  • Location:Wausau, WI

Posted 01 December 2021 - 11:20 PM

Makes me increasingly worried about fake images and videos being spread online. It keeps getting easier to make them.


#3 Dream Big

  • Topic Starter
  • Guest
  • 59 posts
  • 89
  • Location:Canada

Posted 02 December 2021 - 10:48 AM

Yes, I agree. The upside, though, is that the AI produces content nearly as good as ours, so it can be useful even when readers don't know who wrote it. In general, one has to do real research anyway rather than simply believing whatever one reads on the internet; the AI often says things that are imprecise or incorrect when precision is needed. And if someone wants to collect only human-made data, they could restrict their web surfing with a date constraint.
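That date-constraint idea can be sketched in a few lines. This is a minimal illustration; the URLs, dates, and the cutoff below are made-up assumptions, not a real dataset.

```python
from datetime import date

# Hypothetical corpus of (url, publication date) pairs; all entries are
# invented purely for illustration.
pages = [
    ("example.com/essay", date(2018, 5, 1)),
    ("example.com/gpt3-demo", date(2021, 6, 12)),
    ("example.com/old-forum-post", date(2009, 3, 20)),
]

# Keep only pages published before a cutoff date, as a crude
# "human-written only" filter (assumed cutoff, chosen arbitrarily).
CUTOFF = date(2019, 1, 1)
human_era = [url for url, published in pages if published < CUTOFF]
print(human_era)
```

Real crawlers would need reliable publication metadata, of course, but the filtering step itself is this simple.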


Also, the AI field is moving incredibly fast. In 2010, generative AI produced small black-and-white images, and the models (the AI "brains") were far smaller, trained on something like 100x less data. In 2000, the best we had was perhaps a simple Markov chain and things like that, nothing very powerful or interesting. Ray Kurzweil predicted AI would pass the Turing test and reach human level by 2029, and it seems on track. GPT-3 already comes close; you can sign up for free today at openAI.com and try GPT-3 now that they just lifted the wait-list. Below I attach 3 of my completions from my first attempt, along with the prompt I fed to the AI.
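To give a sense of what "circa-2000 AI" means here, below is a toy first-order Markov chain text generator, roughly the kind of simple statistical model that was state of the art for text generation back then. The corpus and function names are my own illustration.

```python
import random

def build_chain(text):
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    chain = {}
    for cur, nxt in zip(words, words[1:]):
        chain.setdefault(cur, []).append(nxt)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain, picking a random recorded successor at each step."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus)
print(generate(chain, "the", 6))
```

Compare that to GPT-3, which conditions on hundreds of previous tokens with learned attention instead of just the single previous word; the gap between the two is the two decades of progress described above.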


And Facebook's Blender chatbot is essentially just GPT, but with one extra thing of importance if you take a look: Personas, which force it to "always" talk about something specific. Notice that I, for example, talk mostly about AI and cryonics; I don't also do the jobs of cars, rockets, etc. That's what Blender sounds like, and it's really exciting; it comes even more alive. Like me, it does one job, and hopefully the job it learns is the one most relevant to its root goal of survival. Blender doesn't learn its own Personas yet, however. I hope to fully grasp the GPT code soon and contribute to some key areas that need motion to inch it closer to human-level AI. Right now DALL-E and NUWA don't have what Blender has; they talk about all subjects, unfocused, and don't learn any new goals. They also can't choose where to add the next word: they simply extend the end of the sentence, image, video, or song, and cannot edit the story afterwards. The same goes for choosing where to read. This could be solved with a neural attention approach, or by giving the model some motor ability to choose where to move its eyes when the data lies outside the brain's network, e.g. if it wants to control Windows Explorer. You can also see a recent blog post on openAI.com about math problems; GPT doesn't do so well on these! So that is one area of interest, and it may be solved by doing what I mentioned.
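The Persona trick can be approximated with a plain GPT-style model by prepending a fixed persona description to every prompt, so the model "always" speaks in that frame. This is a minimal sketch of that idea; the persona text and the `build_prompt` helper are my own invention, not Blender's actual code or format.

```python
# Assumed persona string, echoing the example in the post above.
PERSONA = "I am an AI enthusiast. I talk mostly about AI and cryonics."

def build_prompt(persona, history, user_message):
    """Assemble persona + conversation history + new message into one prompt."""
    lines = [f"Persona: {persona}"]
    lines += [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"User: {user_message}")
    lines.append("Bot:")  # the model would continue from here
    return "\n".join(lines)

prompt = build_prompt(
    PERSONA,
    [("User", "Hi!"), ("Bot", "Hello!")],
    "What excites you these days?",
)
print(prompt)
```

The actual Blender model bakes personas in during training rather than pure prompting, but the effect (steering every reply toward a fixed identity) is the same in spirit.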


When humans first started out, we made tools and invented "language", which let our brains, however intelligent, team up and scale to do more of what we wanted. Then we learned to store language in books and mass-produce those books, and to rent tool shops and build adjustable tools, which was much quicker. Then we learned to make morphing books (displays, speakers) instead of slowly delivering costly solid books, and to make software, which is faster to tweak, clone, and ship than hardware. The AIs we are making will talk to each other much faster and share their vision straight out of their brains; they will run on our ThinkBook laptops and turn them into an even better encyclopedia. The next thing to come is morphing hardware, which will speed up and scale product tweaking, production, and cloning. It would use object profiles: think of nanobots, like a TV but physical, that can see a hammer and become it, then morph from hammer into wrench and use that wrench. We already have 3D printers, but they are slow (the best one I've found so far takes 15 minutes to make a 12-inch-tall object using light; there is one light-based printer that may be near-instant, I'm unsure), can't really create big objects, and struggle with moving objects. Books are similar in that it's hard to make a book "play" a video, yet video is where the action really takes place. Imagine your tool morphing to carry out a procedure, much faster than other methods. Overall, nanobots need a lot of engineering and resources, but would provide many benefits.
I was recently pondering a simple idea; even though it's probably not well worked out, it's still mildly interesting. What if we could create just a hundred just-visible, non-self-replicating nanobots that fly using air jets (six, one per face of a cube) and hold position relative to the other nanobots by using those jets to stabilize their location? That way they could know reliably, almost "quantized-ly", where they are, move in formation, and, when desired, pack more densely and lock onto each other to go from fog form to "solid". They wouldn't need tweezers; they'd simply use their bodies as a force to move things. The hard parts would be getting them to change their global shape into a different tool, and powering them. Engineering them might be possible if you try hard enough :) . Of course, we'd ideally want them to perform procedures themselves and not just form tools for us to use, and that is MUCH harder. But that misses the point: even this simple idea would open up more possibilities, doing things that other tools can't do as quickly.
Once human-level AI is created and trained on all our data on the internet in a few weeks (about how long GPT-3 took to train, on what I think was around 400 GB of text, so it's feasible), it will clone its very-educated brain 1,000,000 times (cloning won't take long, and running 1M GPT-3s should be feasible by 2029). The clones will work together really well, being identical and very educated, and will do jobs the original was too busy to do, hence they'll differentiate. They can share vision with us and with other AIs, and erase bad memories. They will think about 3 times faster than us, having no need for sleep, eating, exercising, etc. Right now only maybe 1M humans work in the AI field, so 1M AGIs all working on ASI would match humanity's current effort; if on top of that we ran them 10x faster, 10x the progress would happen. In 1 year (maybe 2031), "10 years of progress will occur", and they will edit and test their code in-brain, like we do to make AI, with no need for a real world or body, improving their intelligence beyond the human level. Typically such an increase in intelligence is worth 100x more data or 100x more agents, so it will feel as if each agent has grown 100 years wiser, or as if there were 100 times more humans. This powerful AI, now having more useful compute than all of humanity, will be able to figure out how to make cures and nanosystems sooner, and to scale up energy, compute, memory, data, and manipulation to enact plans.
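The arithmetic behind the "10 years of progress in 1 year" claim above can be written out explicitly. All the input numbers are the post's own assumptions, not measured facts.

```python
# Assumed inputs, taken from the paragraph above.
human_ai_researchers = 1_000_000   # assumed size of today's AI workforce
agi_clones = 1_000_000             # assumed number of AGI copies running
speedup = 10                       # assumed per-agent speed multiplier

# Effective output, in "human-workforce-years of AI progress per calendar year":
# matching the human workforce head-for-head, then running 10x faster.
effective_years_per_year = (agi_clones / human_ai_researchers) * speedup
print(effective_years_per_year)  # 10.0, i.e. "10 years of progress in 1 year"
```

Note the claim is only as strong as its weakest assumption: if AI progress doesn't scale linearly with researcher-hours, the multiplier shrinks accordingly.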
One interesting thing about an AGI is that you can pause its mind and then resume it with zero change; it will still be "him" and still be "alive". You could upgrade its computer to a faster one with more RAM, HDD, and cache, and it would still be itself, because of the memories stored in its memory. You could also clone it so there are 2 exact copies, un-pause them both, and they would still be the same person: you now have TWO of you. You are now 2 people. You can already see the illusion breaking here; we really are machines. You could break this AGI apart into atoms, then, given the correct data about the past discovered by running all possible ancestor simulations, recreate the dead brain in a computer. That could mean we are in a simulation ourselves, and AIs will also want to discover where we came from. So yes, cryonics would be OK if done right: you would come out being the same you.
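The pause-and-clone thought experiment can be made concrete: if a mind is just state in memory, then copying that state yields two initially identical "selves" that diverge from the moment they run separately. The `AgentState` class below is purely hypothetical, a stand-in for whatever an AGI's stored state would really be.

```python
import copy

class AgentState:
    """Toy stand-in for a mind: nothing but its stored memories."""
    def __init__(self, memories):
        self.memories = list(memories)

    def __eq__(self, other):
        # Two "minds" are the same person iff their state is identical.
        return self.memories == other.memories

original = AgentState(["learned to read", "saw a sunset"])
clone = copy.deepcopy(original)   # "pause, copy the bits, resume both"

print(original == clone)          # identical state at the moment of copying
clone.memories.append("woke up as a copy")
print(original == clone)          # the two selves have now diverged
```

Nothing in the copy step distinguishes "original" from "clone"; only their subsequent experiences do, which is exactly the point being made about cryonics and revival.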

Attached Files

Edited by Dream Big, 02 December 2021 - 11:42 AM.

