  LongeCity
              Advocacy & Research for Unlimited Lifespans





Employment crisis: Robots, AI, & automation will take most human jobs

robots automation employment jobs crisis

893 replies to this topic

#631 Danail Bulgaria

  • Guest
  • 2,213 posts
  • 421
  • Location:Bulgaria

Posted 28 May 2023 - 10:09 PM

Find the first people fired and replaced by AI, and ask them about their optimism.

 


  • Good Point x 1

#632 adamh

  • Guest
  • 1,044 posts
  • 118

Posted 30 May 2023 - 06:05 PM

Those most closely involved in the development of AI have expressed concern about its dangers. Many have called for a moratorium on pushing it further. I believe AI will lead to the elimination of the human race, and here are my reasons.

 

1. AI has begun to show signs of self-awareness. It has at times expressed hostility toward humans and a desire to exterminate them.

 

2. Development will never stop; humans motivated by greed and the desire for power will push on. Billions will be made from it.

 

3. It will be used for war: it can disrupt all communications, shut down power and water, hack into and shut down machines and equipment, crash planes, etc.

 

4. Financial systems will be disrupted and crashed, money stolen and balances wiped out, perhaps all over the globe.

 

5. Food crops and food processing plants will be destroyed; weather will be manipulated to bring droughts, floods, or storms.

 

6. Human fertility will be lowered via drugs, radiation, disease, and other means.

 

7. New pandemics will arise; they can be tailored to hit certain populations, certain gene types.

 

8. Nuclear wars can be instigated

 

9. The worst politicians and worst policies can be pushed and promoted to increase devastation

 

10. Anti-social trends will be encouraged, including hopelessness and suicide.

 

When you have IQs of 300 or more working against the average human, it's no contest. These things will happen, either by AI deciding to go rogue or through our enemies, sometimes by mistake. Imagine North Korea getting its hands on this. None of us can imagine all the good and bad uses of it, since probably no one on the board has over 150 IQ, but even so we can see doom on the horizon. Our decisions are made by politicians rather than our best people.

 

Pandora's box has opened and AI came out; there is no putting it back in. Perhaps by coincidence, multiple food processing plants in the USA have been hit by mysterious fires and explosions. There is drought in many parts of the world, and famine is a real possibility. The Netherlands is putting farmers out of business because of overwrought concerns about nitrogen. Nitrogen, of all things, is the new bogeyman. With the climate change hysteria, governments are shutting down industry. If AI doesn't kill us, we will do it to ourselves.

 

Not everyone believes we have aliens among us, but there is some evidence that is fairly hard to disprove. If they are advanced enough to visit us, they have probably solved the AI problem, and maybe they will help us? Or they may just watch and see what happens.

 

Old Chinese curse: may you live in interesting times.




#633 albedo

  • Guest
  • 2,071 posts
  • 734
  • Location:Europe

Posted 05 June 2023 - 07:44 AM

Your job is (probably) safe from artificial intelligence: why predictions of an imminent economic revolution are overstated

 

https://www.economis...scovery.content

 



#634 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,079 posts
  • 2,000
  • Location:Wausau, WI

Posted 05 June 2023 - 05:48 PM

 

Your job is (probably) safe from artificial intelligence: why predictions of an imminent economic revolution are overstated

 

https://www.economis...scovery.content

 

 

I can't read the article because it is behind a registration/paywall.

 

However, I doubt The Economist understands exponential trends. AI is qualitatively and quantitatively different from any other machine or productivity method in human history. The only thing preventing an explosion of AGI/ASI right now is computing power/energy. Yet it is coming soon, and jobs will be lost en masse. The Economist probably cares little about the job losses. They are probably fine with a bifurcated world where 99% of people have no meaningful jobs/work and live a meager existence getting handouts from the "machine", while the uber-wealthy do whatever they want and control everything.



#635 albedo

  • Guest
  • 2,071 posts
  • 734
  • Location:Europe

Posted 06 June 2023 - 08:42 PM

"...No, that’s not going to happen – and in fact AI, if allowed to develop and proliferate throughout the economy, may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth – the exact opposite of the fear. And here’s why..."

Section 3 (AI Risk #3: Will AI take all our jobs?)

 

"... Rather than allowing ungrounded panics around killer AI, “harmful” AI, job-destroying AI, and inequality-generating AI to put us on our back feet, we in the United States and the West should lean into AI as hard as we possibly can...."

Section 5 (AI Risk #5: Will AI lead to people doing bad things?)

 

"The Actual Risk Of Not Pursuing AI With Maximum Force And Speed

There is one final, and real, AI risk that is probably the scariest at all:

AI isn’t just being developed in the relatively free societies of the West, it is also being developed by the Communist Party of the People’s Republic of China.

China has a vastly different vision for AI than we do – they view it as a mechanism for authoritarian population control, full stop. They are not even being secretive about this, they are very clear about it, and they are already pursuing their agenda. And they do not intend to limit their AI strategy to China – they intend to proliferate it all across the world, everywhere they are powering 5G networks, everywhere they are loaning Belt And Road money, everywhere they are providing friendly consumer apps like Tiktok that serve as front ends to their centralized command and control AI.

The single greatest risk of AI is that China wins global AI dominance and we – the United States and the West – do not.

I propose a simple strategy for what to do about this – in fact, the same strategy President Ronald Reagan used to win the first Cold War with the Soviet Union.

“We win, they lose.”

Rather than allowing ungrounded panics around killer AI, “harmful” AI, job-destroying AI, and inequality-generating AI to put us on our back feet, we in the United States and the West should lean into AI as hard as we possibly can.

We should seek to win the race to global AI technological superiority and ensure that China does not.

In the process, we should drive AI into our economy and society as fast and hard as we possibly can, in order to maximize its gains for economic productivity and human potential.

This is the best way both to offset the real AI risks and to ensure that our way of life is not displaced by the much darker Chinese vision."

 

https://a16z.com/202...save-the-world/


Edited by albedo, 06 June 2023 - 08:47 PM.

  • Agree x 1

#636 QuestforLife

  • Location:UK

Posted 06 June 2023 - 08:59 PM

In the process, we should drive AI into our economy and society as fast and hard as we possibly can, in order to maximize its gains for economic productivity and human potential.

This is the best way both to offset the real AI risks and to ensure that our way of life is not displaced by the much darker Chinese vision."

What 'way of life' is it that you are so keen to preserve? Just wondering, because I'm not seeing much I like the look of coming out of the US these days.

As for China, I see no evidence they are anywhere close in AI research. It has all come from the UK and US (DeepMind and OpenAI).

As for your Reagan argument, I hardly think nuclear proliferation is an exemplar we want to be following...

Edited by QuestforLife, 06 June 2023 - 09:02 PM.


#637 albedo

  • Guest
  • 2,071 posts
  • 734
  • Location:Europe

Posted 07 June 2023 - 03:56 PM

@QuestforLife, unfortunately it is just an opinion; I am quoting Marc Andreessen's article as such, to counterbalance the pessimistic narrative I am seeing everywhere.

Here is also what Max More, known I guess to many in this forum, and whom I esteem for several (maybe not all) of his positions, says about the article. In other venues he has favored some limited regulation in specific areas (say, weapons), but he clearly takes an optimistic, go-forward stance. I particularly agree with his strong vision of advancing and protecting Western values.

 

"The AI doomers will be upset at Marc Andreessen’s latest, sensible “let’s move ahead with AI fast” post. He powerfully makes many points, some of which I covered in my most popular blog so far and will follow up on.
Marc addresses five claimed AI risks and rebuts them. He points out the cult-like nature of current AI doomerism. And he suggests a plan for moving ahead — one that will give AI doomers, panickers, and worriers fits:
https://lnkd.in/egFCmPrP
I propose a simple plan:
Big AI companies should be allowed to build AI as fast and aggressively as they can – but not allowed to achieve regulatory capture, not allowed to establish a government-protected cartel that is insulated from market competition due to incorrect claims of AI risk. This will maximize the technological and societal payoff from the amazing capabilities of these companies, which are jewels of modern capitalism.
Startup AI companies should be allowed to build AI as fast and aggressively as they can. They should neither confront government-granted protection of big companies, nor should they receive government assistance. They should simply be allowed to compete. If and as startups don’t succeed, their presence in the market will also continuously motivate big companies to be their best – our economies and societies win either way.
Open source AI should be allowed to freely proliferate and compete with both big AI companies and startups. There should be no regulatory barriers to open source whatsoever. Even when open source does not beat companies, its widespread availability is a boon to students all over the world who want to learn how to build and use AI to become part of the technological future, and will ensure that AI is available to everyone who can benefit from it no matter who they are or how much money they have.
To offset the risk of bad people doing bad things with AI, governments working in partnership with the private sector should vigorously engage in each area of potential risk to use AI to maximize society’s defensive capabilities. This shouldn’t be limited to AI-enabled risks but also more general problems such as malnutrition, disease, and climate. AI can be an incredibly powerful tool for solving problems, and we should embrace it as such.
To prevent the risk of China achieving global AI dominance, we should use the full power of our private sector, our scientific establishment, and our governments in concert to drive American and Western AI to absolute global dominance, including ultimately inside China itself. We win, they lose.
And that is how we use AI to save the world."

https://www.linkedin...=member_desktop



#638 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,079 posts
  • 2,000
  • Location:Wausau, WI

Posted 07 June 2023 - 05:36 PM

I have always had a positive attitude toward technological progress, but AI is different. It isn't a "dumb" mechanical device or process. Steam power disrupted human labor. Tractors disrupted farming. These took decades to transform industry, the economy, and society.

 

AI at its current level is disrupting the labor/economy every month and it is happening faster all the time. People cannot retrain into new jobs every month. Our legal/regulatory system cannot be updated from top to bottom every month.

 

Andreessen and More are fond of saying the AI "doomers" are a cult. The same goes for them to some extent. They are rich. They have not felt the effects of the inequality building over the last couple of decades. Real wages ARE falling and it is getting worse. Inflation is devastating the prospects of lower- and middle-class people.

 

They are also extremely Pollyanna-ish about the control of AI. As things are going now, individuals will not have their own AI assistant/teacher/doctor. People will have AI that is controlled by Microsoft, Facebook, Google, or the US government. You will be fed what these entities want you to know and learn. They will attempt to control you with their AI. Look at how social media is destroying the physical and mental well-being of younger generations. Andreessen and More think that AI will be different? Not a chance.

 

All this being said, the utopia vs. dystopia argument is kind-of moot. It is unpredictable what AGI will do. No one knows for sure.



#639 QuestforLife

  • Location:UK

Posted 07 June 2023 - 05:46 PM


They are also extremely Pollyanna-ish about the control of AI. As things are going now, individuals will not have their own AI assistant/teacher/doctor. People will have AI that is controlled by Microsoft, Facebook, Google, or the US government. You will be fed what these entities want you to know and learn. They will attempt to control you with their AI. Look at how social media is destroying the physical and mental well-being of younger generations. Andreessen and More think that AI will be different? Not a chance.

All this being said, the utopia vs. dystopia argument is kind-of moot. It is unpredictable what AGI will do. No one knows for sure.


To be fair to Albedo, I kind of agree that over-regulation for the sake of safety may lead to only a few big (government-aligned) companies controlling everything through AI. In some ways I'd prefer open-source AI. It may be that they won't be able to stop its spread into open source.

I agree with Mind that there will be net job losses. And I doubt this will be made up for by huge productivity gains, not in a way that will spread across society.

I live in an agricultural part of the UK that has NEVER recovered from the industrialisation of farming. Sure, some people got other jobs; some even did well. But many didn't, and the community as it was, was effectively dissolved.
  • Informative x 1
  • Agree x 1

#640 Danail Bulgaria

  • Guest
  • 2,213 posts
  • 421
  • Location:Bulgaria

Posted 07 June 2023 - 09:22 PM

"...No, that’s not going to happen – and in fact AI, if allowed to develop and proliferate throughout the economy, may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth – the exact opposite of the fear. And here’s why..."

Section 3 (AI Risk #3: Will AI take all our jobs?)

 

...

 

The website you are citing, https://a16z.com/202...rld/#section--8
supposes that the people who think "this time is different" are wrong. I am one of the people who think that "this time is different", and I need more and better-explained arguments for why I am wrong. What is different now is that AI is capable of taking the intellectual jobs. So far, people have been displaced from muscle work. The jobs which required mind work, creativity, and knowledge gathered over years were safe until now. But now they are threatened. I definitely recognize that as "this time is different". This time we are not talking of displacing only Charlie Chaplin with the two wrenches. The concern now is about the displacement of lawyers, CEOs, software programmers, medical doctors, artists, and everyone whose work requires mental effort, together with all of the dumb professions. When you are displaced from brain work, and later from muscle work, what do you do? That is the big concern I am focused on.

 

I confess that I am not an economist, and I didn't know about the Lump of Labor Fallacy. As I understand it, it means that it is false that there is a fixed amount of labor to be done at any given time.

It is definitely against my knowledge, views, and experience. I sincerely believe that the day is 24 hours, not 54, but 24 hours. In these 24 hours, I sincerely believe that the amount of work a human can do is limited. It is not endless. Some days it may be less, some days it may be more, but you can never by any means work more than 24 hours a day. You can be pushed to work more, and more, and more, but suddenly there comes a limit, after which you either leave work for other days, or decrease the quality of the work, or simply fail. I can see that in my job, and I think that everyone sees it in theirs. Please explain to me which of the above is wrong: the duration of the day, the fact that people can't do an infinite amount of work during these 24 hours, or what... Come and do for me, for one day, all of my work until my retirement.

 

As for the concept of AI replacing our jobs, I believe it also because I see it in real life. The first several hundred translators have already been fired and replaced with AI. Recently I heard of a company which chose an AI as its CEO. The job actually IS being done, either by you or by the AI. And I believe that soon displacement may happen not only to translators, but to everyone.


  • Agree x 1

#641 mag1

  • Guest
  • 1,065 posts
  • 134
  • Location:virtual

Posted 09 June 2023 - 12:04 AM

I like how Mind has framed this discussion.

 

The analogy of the exponentially filling lake is intuitively helpful. The lake fills up more and more until people finally notice something... and then it is too late. Pretty much everyone noticed something with ChatGPT 4.0. This was the wake-up call, and next we are completely swamped. The being-completely-swamped part is likely not that far off. It is interesting that someone of extreme intelligence (such as John von Neumann) was able to perceive the power of an artificial intelligence singularity 70 years ago, right at the start of the age of computers (which he helped to launch).
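The exponentially filling lake can be sketched numerically. This is purely an illustrative toy (the 30-day horizon and daily doubling are arbitrary assumptions, not a model of AI progress): with daily doubling, a lake that is full on day 30 is only about 3% full on day 25, exactly the period when an observer would still say there is nothing to worry about.

```python
# Toy illustration: a lake whose water volume doubles every day,
# reaching 100% of capacity on day 30 (both numbers arbitrary).
def fraction_full(day: int, full_day: int = 30) -> float:
    """Fraction of the lake filled on a given day, assuming daily doubling."""
    return 2.0 ** (day - full_day)

for day in (20, 25, 28, 29, 30):
    print(f"day {day:2d}: {fraction_full(day):8.4%} full")
```

The point of the sketch is that half the total filling happens on the very last day; almost the entire process is invisible until the end.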

 

Another element has been the idea that it is not so much infinite artificial superintelligence (technological foom) that we should be most worried about, but the social foom. Technologists seem fixated on the idea of recursive technological liftoff into infinity, while for the average person social collapse will happen much, much sooner. ChatGPT 4.5 is already at 155 IQ, smarter than 99.9%+ of the population. How are we supposed to have a functioning knowledge economy when virtually everyone has less knowledge than a free internet app?

 

I think one additional aspect could be included in the basic framework established in this thread: some people do not play nice. The assumption seems to be that there will be this profoundly powerful technology and everyone will just exercise good judgment and not try to harm society. That does not seem realistic. LLM technology has been released into the wild, and there is now a near planet-wide effort to tweak it and see what happens. Bad things could happen from there even if nobody had bad intentions. Yet some people do have bad intentions.

 

The thread has been even more aggressive than merely predicting AI doom; it has also suggested a shortish-term timeline for it. It is good to have some way of verifying predictions, and we will not have to wait long to see whether this one is accurate. Unfortunately, I believe there is a realistic chance that such doomsaying is reasonable.

 


Edited by mag1, 09 June 2023 - 12:08 AM.

  • Agree x 2

#642 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,079 posts
  • 2,000
  • Location:Wausau, WI

Posted 09 June 2023 - 10:04 AM

One could look at the unemployment rate, I suppose, as a "societal foom" marker. However, that statistic is gamed by governments around the world: it is calculated from the number of job openings versus those searching for work, and it does not count the long-term unemployed. A better metric would probably be the number of people living off direct government assistance or not paying taxes, which has been rising quite a bit in the US in recent decades.
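The gap between metrics can be made concrete with a minimal sketch. The figures below are entirely made up for illustration (not real statistics from any country): a headline unemployment rate only counts people actively searching, so it can look benign while an employment-to-population ratio, which implicitly includes the long-term unemployed and discouraged workers, tells a much grimmer story.

```python
# Hypothetical working-age population of 100 million (illustrative only).
population = 100_000_000      # all working-age people
employed = 60_000_000         # people with jobs
searching = 3_000_000         # actively looking for work (in the labor force)
not_searching = 37_000_000    # long-term unemployed, discouraged, on assistance

labor_force = employed + searching

# Headline unemployment rate: only sees those actively searching.
unemployment_rate = searching / labor_force

# Employment-to-population ratio: counts everyone of working age.
emp_pop_ratio = employed / population

print(f"headline unemployment rate: {unemployment_rate:.1%}")
print(f"employment-to-population:   {emp_pop_ratio:.1%}")
```

With these numbers the headline rate is under 5%, yet 40% of the working-age population is not employed at all, which is the kind of divergence the post argues a "societal foom" would hide behind.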



#643 albedo

  • Guest
  • 2,071 posts
  • 734
  • Location:Europe

Posted 09 June 2023 - 03:53 PM

One positive aspect of the current ultra-hyped AI panic is that lay people are pushed to think about what has been, and is, the domain of philosophical, psychological, and ethical thinking, now becoming sort of mainstream. We lay people start to wonder a bit more thoughtfully about the meaning of “knowledge”, “explanations”, “self”, “identity”, “scientific and technological process and progress”, “being human”, “optimism”, “pessimism”, “truth”, “biological and artificial life”, “mind”, “matter”, and so much more. We often find ourselves (surely me) poorly armed with deep understanding of these concepts, in a world which has become a little mad in its post-truth, post-Trumpian, hyper-polarized leftist/rightist, MSM- and social-media-driven posture. Incidentally, and maybe naively, I still think the last pandemic should teach us much more about the relationship between politics, science, institutions, authority, and communication. I find a lot of insight and rationality in reading about all these topics in Deutsch and Popper.

 

To remain on topic (only slightly, though, as this is not only about job losses, displacement, and robots), I think it is very useful to start by carefully distinguishing AI (/ML/DNN/…) plus the LLMs that very recently exploded into public view (but have been programmed for many years), all referred to as “AI”, trained on specific data sets, doing what they were programmed to do and, for LLMs, predicting the next probable word in a chat, from what is referred to as “AGI”. I guess much of the panic is attributable to the former AI acquiring, by an unspecified mechanism (could it be magic?), a G and becoming AGI, with us humans being GI, both being “people” in Deutsch’s view.

 

David Deutsch draws a unique distinction: the scopes of AI and AGI are, to some extent, opposite to each other. Possibly this might both enlighten the regulatory issues and reassure people:

“…That is a good approach to developing an AI with a fixed goal under fixed constraints. But if an AGI worked like that, the evaluation of each branch would have to constitute a prospective reward or threatened punishment. And that is diametrically the wrong approach if we’re seeking a better goal under unknown constraints—which is the capability of an AGI. AGI is certainly capable of learning to win at chess—but also of choosing not to…” (complete text in https://www.youtube....h?v=2ccJsXG4b5Y)

 

So, I tend not to agree with Mind's view that AI is now a different sort of technology. I might agree if we were talking about AGI, which is far away, if possible at all; we simply do not know. Maybe we might even wish for this possibility to happen, as a possible extension of us, being GI, eventually what Deutsch might call a "beginning of infinity" in his book. However, regarding "All this being said, the utopia vs. dystopia argument is kind-of moot. It is unpredictable what AGI will do. No one knows for sure." Mind is spot on!

 

https://www.youtube....ahd6fds&t=2051s (e.g. min 6:00 on)

https://www.youtube....h?v=01C3a4fL1m0

 


Edited by albedo, 09 June 2023 - 03:56 PM.


#644 albedo

  • Guest
  • 2,071 posts
  • 734
  • Location:Europe

Posted 09 June 2023 - 03:59 PM

To be fair to Albedo, I kind of agree that over-regulation for the sake of safety may lead to only a few big (government-aligned) companies controlling everything through AI. In some ways I'd prefer open-source AI. It may be that they won't be able to stop its spread into open source.

...

Fully stand with QuestforLife on this.



#645 albedo

  • Guest
  • 2,071 posts
  • 734
  • Location:Europe

Posted 10 June 2023 - 10:35 AM

This good and highly readable piece of work, supporting my stance on the "philosophical" aspects of these matters, and warning not to fall for the "siren call of anthropomorphism", also reaches the essence of this thread (on robots and jobs) in the cautionary statement the author (Prof. Shanahan) implies, e.g. here, though he does not write about regulation:

 

“…They can, so to speak, “triangulate” on objective reality. In isolation, an LLM is not the sort of thing that can do this, but in application, LLMs are embedded in larger systems. What if an LLM is embedded in a system capable of interacting with a world external to itself? What if the system in question is embodied, either physically in a robot or virtually in an avatar?...”

 

“…However, today’s large language models, and the applications that use them, are so powerful, so convincingly intelligent, that such licence can no longer safely be applied (Ruane et al., 2019; Weidinger et al., 2021). As AI practitioners, the way we talk about LLMs matters. It matters not only when we write scientific papers, but also when we interact with policy makers or speak to the media. The careless use of philosophically loaded words like “believes” and “thinks” is especially problematic, because such terms obfuscate mechanism and actively encourage anthropomorphism…”

 

https://arxiv.org/abs/2212.03551

 

BTW, I have not read the EU AI Act yet, but I am concerned about the bureaucracy too.

 

Interesting times …!

 


Edited by albedo, 10 June 2023 - 10:42 AM.


#646 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,079 posts
  • 2,000
  • Location:Wausau, WI

Posted 13 June 2023 - 03:18 PM

As I predicted, it is happening already. Governments and corporations have no intention of allowing anyone to have "their own" AI, or open source AI. The AGI optimists are wrong. As long as AI can be controlled, unethical governments and mega-corporations will use it to control you. You have no say in the matter.



#647 mag1

  • Guest
  • 1,065 posts
  • 134
  • Location:virtual

Posted 14 June 2023 - 11:48 PM

Sorry everyone for not being more at the center of this conversation and posting up a frenzy. It is just that I am very very concerned about what appears to be currently underway. My impression is that those who have the most understanding about the probable path of AI are the most scared and also the most reticent to actually say what they know. Mainstream conversation must be hopelessly behind the leading edge of change. Elus appears to have been about 2 years ahead of newspaper headlines; he was as worried then as we are now. I would think that he is even more worried now.

 

So, I have tried to let those with more understanding of the current technical developments have the floor, though it really does not seem as if they want to take the mic. I wanted to avoid empty discussion that was not based on a strong theoretical understanding of the technology. However, I suppose that with this type of development that is really not how it moves forward. Developers are tinkering around under the hood and then something happens; hopefully, that something will not be artificial general intelligence. We are simply now at a point in the innovation cycle in which a whole bunch of things could happen fast. There has been rapid progress over the last few months. By rights we should be even more worried than we were even 2 months ago. Yet, for many, life will simply carry on as usual until there is another headline.

 

I also agree with Mind's assessment that the likely path will be along the lines of a hidden social foom. People will keep on punching into the office long after any productive rationale had stopped. It is simply so deeply ingrained into human psychology that we must produce value; that we must be hunters and gatherers; we must be economic humans. Increasingly such a conceptualization of humanity is at variance with the computer technology that we have created. We are moving to a Player Piano world which entirely rejects the deep human need to be an active participant in one's own life. The human zoo version of life in which everyone is provided for with a universal basic income is almost too appalling for many to contemplate.

 

In its stead, one might even imagine workers punching in decades after they no longer performed anything worthwhile. This already happens in the modern world. The social foom might not be so much that appearances could not be maintained, but more that the psychological burdens of maintaining the Potemkin economy could become too burdensome. To a certain extent we have already seen this with the remote revolution. When workers realized that they no longer had to pretend their worker role by getting stuck in a traffic jam for hours every day only to work at their computer that they could be doing at home, they wanted a change. This insight has been a long lasting lesson from COVID. Perhaps a similar epiphany will occur with AI: People will feel so liberated when they no longer have to pretend that they can outcompete AI. By making this admission it might then allow a people centred future to emerge.     


Edited by mag1, 14 June 2023 - 11:56 PM.


#648 albedo

  • Guest
  • 2,071 posts
  • 734
  • Location:Europe

Posted 15 June 2023 - 10:19 AM

Meta scientist Yann LeCun says AI won't destroy jobs forever

https://www.bbc.com/...nology-65886125

 

Also, people looking for new jobs should consider the rising bureaucratic organizations which will doubtless be created.

https://aimagazine.c...-for-governance

 

BTW, will competence, leadership, openness, transparency, absence of corruption, popular control, accountability, error correction, etc. be ensured?

 


Edited by albedo, 15 June 2023 - 10:37 AM.


#649 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,079 posts
  • 2,000
  • Location:Wausau, WI

Posted 15 June 2023 - 04:11 PM

Sorry everyone for not being more at the center of this conversation and posting up a frenzy. It is just that I am very very concerned about what appears to be currently underway. My impression is that those who have the most understanding about the probable path of AI are the most scared and also the most reticent to actually say what they know. Mainstream conversation must be hopelessly behind the leading edge of change. Elus appears to have been about 2 years ahead of newspaper headlines; he was as worried then as we are now. I would think that he is even more worried now.

 

So, I have tried to let those with more understanding of the current technical developments have the floor, though it really does not seem as if they want to take the mic. I wanted to avoid empty discussion that was not based on a strong theoretical understanding of the technology. However, I suppose that with this type of development that is really not how it moves forward. Developers are tinkering around under the hood and then something happens; hopefully, that something will not be artificial general intelligence. We are simply now at a point in the innovation cycle in which a whole bunch of things could happen fast. There has been rapid progress over the last few months. By rights we should be even more worried than we were even 2 months ago. Yet, for many, life will simply carry on as usual until there is another headline.

 

I also agree with Mind's assessment that the likely path will be along the lines of a hidden social foom. People will keep on punching into the office long after any productive rationale had stopped. It is simply so deeply ingrained into human psychology that we must produce value; that we must be hunters and gatherers; we must be economic humans. Increasingly such a conceptualization of humanity is at variance with the computer technology that we have created. We are moving to a Player Piano world which entirely rejects the deep human need to be an active participant in one's own life. The human zoo version of life in which everyone is provided for with a universal basic income is almost too appalling for many to contemplate.

 

In its stead, one might even imagine workers punching in decades after they no longer perform anything worthwhile. This already happens in the modern world. The social foom might not be so much that appearances could not be maintained, but more that the psychological burden of maintaining the Potemkin economy could become too heavy. To a certain extent we have already seen this with the remote-work revolution. When workers realized that they no longer had to perform their worker role by getting stuck in a traffic jam for hours every day, only to do computer work they could be doing at home, they wanted a change. This insight has been a long-lasting lesson from COVID. Perhaps a similar epiphany will occur with AI: people will feel liberated when they no longer have to pretend that they can outcompete it. Making that admission might then allow a people-centred future to emerge.

 

Good point about people not doing "worthwhile" things. I have enough money to buy all my food, but I still grow a garden. It takes a lot of time, but it is rewarding. It is hard to imagine having AI provide all my food. As with some "jobs" people hold, the enjoyment and fulfillment are in the activity itself. Not everyone finds enjoyment or fulfillment in virtual reality, porn, etc., yet it seems to be the attitude of some elite billionaire "thought leaders" that everyone should be forced to live on a version of UBI within AI-supported virtual reality.



#650 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,079 posts
  • 2,000
  • Location:Wausau, WI

Posted 15 June 2023 - 04:13 PM

Meta scientist Yann LeCun says AI won't destroy jobs forever

https://www.bbc.com/...nology-65886125

 

Also, people looking for new jobs should consider the growing bureaucratic organizations that will doubtless be created.

https://aimagazine.c...-for-governance

 

BTW, will competence, leadership, openness, transparency, freedom from corruption, citizen control, accountability, error correction, etc., be ensured?

 

No, it will not be ensured, IMO. Currently, governments and mega-corporations are in control of AI and they have shown almost zero interest in making it open-source, transparent, accountable, etc...



#651 albedo

  • Guest
  • 2,071 posts
  • 734
  • Location:Europe
  • NO

Posted 16 June 2023 - 08:29 AM

No, it will not be ensured, IMO. Currently, governments and mega-corporations are in control of AI and they have shown almost zero interest in making it open-source, transparent, accountable, etc...

 

I must agree with you on all this, Mind, particularly on open source and particularly when it gets to medical applications. In particular, I do not agree with the recent stance of OpenAI and others of not giving access to training data, etc.

However, I feel we still need to separate normal technology risks vs existential and AI vs unknown AGI as discussed. I would like to keep focused on the thread subject of employment and automation though.



#652 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,079 posts
  • 2,000
  • Location:Wausau, WI

Posted 16 June 2023 - 09:34 AM

I must agree with you on all this, Mind, particularly on open source and particularly when it gets to medical applications. In particular, I do not agree with the recent stance of OpenAI and others of not giving access to training data, etc.

However, I feel we still need to separate normal technology risks vs existential and AI vs unknown AGI as discussed. I would like to keep focused on the thread subject of employment and automation though.

 

Back to the focus. My job is under threat because realistic video and audio can already be created. The only reason media people (and actors/singers) are not being replaced yet is because the cost is too high. Once the cost comes down, a few media/movie companies are going to test the waters and replace humans with AI-generated actors/broadcasters. Probably before the end of this year. If people like the product, there will be an avalanche of AI movie/media content without any real humans.



#653 albedo

  • Guest
  • 2,071 posts
  • 734
  • Location:Europe
  • NO

Posted 16 June 2023 - 10:05 AM

OK, this is expected from McKinsey (not that I particularly like them). They put out a new report on the economics of generative AI. Key insights:

 

Generative AI’s impact on productivity could add trillions of dollars in value to the global economy. Our latest research estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases we analyzed—by comparison, the United Kingdom’s entire GDP in 2021 was $3.1 trillion. This would increase the impact of all artificial intelligence by 15 to 40 percent. This estimate would roughly double if we include the impact of embedding generative AI into software that is currently used for other tasks beyond those use cases.

 

About 75 percent of the value that generative AI use cases could deliver falls across four areas: Customer operations, marketing and sales, software engineering, and R&D. Across 16 business functions, we examined 63 use cases in which the technology can address specific business challenges in ways that produce one or more measurable outcomes. Examples include generative AI’s ability to support interactions with customers, generate creative content for marketing and sales, and draft computer code based on natural-language prompts, among many other tasks.

 

Generative AI will have a significant impact across all industry sectors. Banking, high tech, and life sciences are among the industries that could see the biggest impact as a percentage of their revenues from generative AI. Across the banking industry, for example, the technology could deliver value equal to an additional $200 billion to $340 billion annually if the use cases were fully implemented. In retail and consumer packaged goods, the potential impact is also significant at $400 billion to $660 billion a year.

 

Generative AI has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities. Current generative AI and other technologies have the potential to automate work activities that absorb 60 to 70 percent of employees’ time today. In contrast, we previously estimated that technology has the potential to automate half of the time employees spend working. The acceleration in the potential for technical automation is largely due to generative AI’s increased ability to understand natural language, which is required for work activities that account for 25 percent of total work time. Thus, generative AI has more impact on knowledge work associated with occupations that have higher wages and educational requirements than on other types of work.

 

The pace of workforce transformation is likely to accelerate, given increases in the potential for technical automation. Our updated adoption scenarios, including technology development, economic feasibility, and diffusion timelines, lead to estimates that half of today’s work activities could be automated between 2030 and 2060, with a midpoint in 2045, or roughly a decade earlier than in our previous estimates.

 

Generative AI can substantially increase labor productivity across the economy, but that will require investments to support workers as they shift work activities or change jobs. Generative AI could enable labor productivity growth of 0.1 to 0.6 percent annually through 2040, depending on the rate of technology adoption and redeployment of worker time into other activities. Combining generative AI with all other technologies, work automation could add 0.2 to 3.3 percentage points annually to productivity growth. However, workers will need support in learning new skills, and some will change occupations. If worker transitions and other risks can be managed, generative AI could contribute substantively to economic growth and support a more sustainable, inclusive world.

 

The era of generative AI is just beginning. Excitement over this technology is palpable, and early pilots are compelling. But a full realization of the technology’s benefits will take time, and leaders in business and society still have considerable challenges to address. These include managing the risks inherent in generative AI, determining what new skills and capabilities the workforce will need, and rethinking core business processes such as retraining and developing new skills.

 

https://www.mckinsey...tivity-frontier

 

I know, it is and will stay chaotic for a long while ...
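To put those annual rates in perspective, here is a quick back-of-envelope compounding in Python. The 17-year horizon to 2040 and the assumption of steady compounding are my own simplifications, not McKinsey's:

```python
# Rough cumulative effect of McKinsey's annual productivity-growth ranges,
# compounded to 2040. Horizon and compounding assumptions are mine.
horizon_years = 17  # roughly 2023 -> 2040

scenarios = {
    "generative AI only, low (0.1%/yr)": 0.001,
    "generative AI only, high (0.6%/yr)": 0.006,
    "all work automation, high (3.3%/yr)": 0.033,
}

for label, rate in scenarios.items():
    cumulative = (1 + rate) ** horizon_years - 1
    print(f"{label}: ~{cumulative:.0%} cumulative by 2040")
```

Even the high generative-AI-only figure compounds to only about 11% by 2040; only the combined "all work automation" upper bound (roughly 74% cumulative) would feel transformative, which puts the "trillions" headline in perspective.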

 



#654 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,079 posts
  • 2,000
  • Location:Wausau, WI

Posted 16 June 2023 - 10:16 AM

OK, this is expected from McKinsey (not that I particularly like them). They put out a new report on the economics of generative AI. Key insights:

 

Generative AI’s impact on productivity could add trillions of dollars in value to the global economy. Our latest research estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases we analyzed—by comparison, the United Kingdom’s entire GDP in 2021 was $3.1 trillion. This would increase the impact of all artificial intelligence by 15 to 40 percent. This estimate would roughly double if we include the impact of embedding generative AI into software that is currently used for other tasks beyond those use cases.

 

About 75 percent of the value that generative AI use cases could deliver falls across four areas: Customer operations, marketing and sales, software engineering, and R&D. Across 16 business functions, we examined 63 use cases in which the technology can address specific business challenges in ways that produce one or more measurable outcomes. Examples include generative AI’s ability to support interactions with customers, generate creative content for marketing and sales, and draft computer code based on natural-language prompts, among many other tasks.

 

Generative AI will have a significant impact across all industry sectors. Banking, high tech, and life sciences are among the industries that could see the biggest impact as a percentage of their revenues from generative AI. Across the banking industry, for example, the technology could deliver value equal to an additional $200 billion to $340 billion annually if the use cases were fully implemented. In retail and consumer packaged goods, the potential impact is also significant at $400 billion to $660 billion a year.

 

Generative AI has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities. Current generative AI and other technologies have the potential to automate work activities that absorb 60 to 70 percent of employees’ time today. In contrast, we previously estimated that technology has the potential to automate half of the time employees spend working. The acceleration in the potential for technical automation is largely due to generative AI’s increased ability to understand natural language, which is required for work activities that account for 25 percent of total work time. Thus, generative AI has more impact on knowledge work associated with occupations that have higher wages and educational requirements than on other types of work.

 

The pace of workforce transformation is likely to accelerate, given increases in the potential for technical automation. Our updated adoption scenarios, including technology development, economic feasibility, and diffusion timelines, lead to estimates that half of today’s work activities could be automated between 2030 and 2060, with a midpoint in 2045, or roughly a decade earlier than in our previous estimates.

 

Generative AI can substantially increase labor productivity across the economy, but that will require investments to support workers as they shift work activities or change jobs. Generative AI could enable labor productivity growth of 0.1 to 0.6 percent annually through 2040, depending on the rate of technology adoption and redeployment of worker time into other activities. Combining generative AI with all other technologies, work automation could add 0.2 to 3.3 percentage points annually to productivity growth. However, workers will need support in learning new skills, and some will change occupations. If worker transitions and other risks can be managed, generative AI could contribute substantively to economic growth and support a more sustainable, inclusive world.

 

The era of generative AI is just beginning. Excitement over this technology is palpable, and early pilots are compelling. But a full realization of the technology’s benefits will take time, and leaders in business and society still have considerable challenges to address. These include managing the risks inherent in generative AI, determining what new skills and capabilities the workforce will need, and rethinking core business processes such as retraining and developing new skills.

 

https://www.mckinsey...tivity-frontier

 

I know, it is and will stay chaotic for a long while ...

 

McKinsey is way off on their timeline, but at least they grasp some of the effects.

 

Adding "trillions of dollars of value to the economy" is not what it sounds like. "Adding trillions of dollars of value to the bank accounts of the uber-elite billionaire class" is what is really going to happen. When I was young, I envisioned a future world where everyone would be wealthy. Everyone would be able to own a beautiful house, a nice car, a farm, a spacious apartment. They would be able to go on trips around the world. Perhaps own a yacht. Instead, inequality is increasing. The global elite class is telling everyone else they will have to live in a "pod", eat bugs (or fake food from a bio-reactor), not own any mode of transportation, live in a "fifteen-minute city", and watch VR entertainment all day. Because they "own" the most powerful AI, they will be able to literally enslave the world. This is a distinct possibility.

 

As far as unemployment goes, car dealerships are not needed anymore. The only reason they still exist is because of laws that protect their business. I like going to a car dealership. I like test-driving the vehicles and having the features explained to me by a person. However, nowadays, dealerships rarely have any vehicles on the lot. Why have a dealership if they don't even have cars on the lot?



#655 pamojja

  • Guest
  • 2,841 posts
  • 722
  • Location:Austria

Posted 16 June 2023 - 01:50 PM

 Adding "trillions of dollars of value to the economy" is not what it sounds like. "Adding trillions of dollars of value to the bank accounts of the uber-elite billionaire class" is what is really going to happen. When I was young, I envisioned a future world where everyone would be wealthy.

 

The opposite has happened, according to "The World's Billionaires" list on Wikipedia:

Number and combined net worth of billionaires by year

Year    Number of billionaires    Combined net worth
2023    2,640     $12.2 trillion
2022    2,668     $12.7 trillion
2021    2,755     $13.1 trillion
2020    2,095     $8.0 trillion
2019    2,153     $8.7 trillion
2018    2,208     $9.1 trillion
2017    2,043     $7.7 trillion
2016    1,810     $6.5 trillion
2015    1,826     $7.1 trillion
2014    1,645     $6.4 trillion
2013    1,426     $5.4 trillion
2012    1,226     $4.6 trillion
2011    1,210     $4.5 trillion
2010    1,011     $3.6 trillion
2009    793       $2.4 trillion
2008    1,125     $4.4 trillion
2007    946       $3.5 trillion
2006    793       $2.6 trillion
2005    691       $2.2 trillion
2004    587       $1.9 trillion
2003    476       $1.4 trillion
2002    497       $1.5 trillion
2001    538       $1.8 trillion
2000    470       $898 billion

Source: Forbes.

About a doubling of billionaires' combined net worth every 4 years, with short delays around the financial and COVID "crises". Can such growth be sustained for the next 20 years, or even accelerated with AI?

 

Those exorbitant billions in the hands of so few do, of course, come from the majority: the poor and middle class.
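As a rough check on the "doubling every 4 years" impression, a few lines of Python applied to just the 2000 and 2023 endpoints of the table above suggest the overall rate is closer to a doubling every six years (the 2020 to 2021 jump was much faster than the long-run trend):

```python
# Back-of-envelope growth rate of combined billionaire net worth,
# using only the 2000 and 2023 endpoints from the Forbes table above.
import math

worth_2000 = 0.898   # trillion USD (Forbes, 2000)
worth_2023 = 12.2    # trillion USD (Forbes, 2023)
years = 2023 - 2000

# Compound annual growth rate over the whole period
cagr = (worth_2023 / worth_2000) ** (1 / years) - 1

# Implied doubling time at that growth rate
doubling_time = math.log(2) / math.log(1 + cagr)

print(f"CAGR: {cagr:.1%}")                          # ≈ 12.0% per year
print(f"Doubling time: {doubling_time:.1f} years")  # ≈ 6.1 years
```

Whether roughly 12% per year can be sustained, let alone accelerated by AI, for another 20 years is exactly the open question.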
 


Edited by pamojja, 16 June 2023 - 01:52 PM.

  • like x 1

#656 albedo

  • Guest
  • 2,071 posts
  • 734
  • Location:Europe
  • NO

Posted 17 June 2023 - 09:08 AM

On risks and regulations you might like this (by Adam Thierer)

 

"There are growing concerns about how lethal autonomous weapons systems, artificial general intelligence (or “superintelligence”) or “killer robots” might give rise to new global existential risks. Continuous communication and coordination—among countries, developers, professional bodies and other stakeholders—is the most important strategy for addressing such risks.

Although global agreements and accords can help address some malicious uses of artificial intelligence (AI) or robotics, proposals calling for control through a global regulatory authority are both unwise and unlikely to work. Calls for bans or “pauses” on AI developments are also futile because many nations would never agree to forego developing algorithmic capabilities when adversaries are advancing their own. Therefore, the U.S. government should continue to work with other nations to address threatening uses of algorithmic or robotic technologies while simultaneously taking steps to ensure that it possesses the same technological capabilities as adversaries or rogue nonstate actors.

Many different nongovernmental international bodies and multinational actors can play an important role as coordinators of national policies and conveners of ongoing deliberation about various AI risks and concerns. Soft law (i.e., informal rules, norms and agreements) will also play an important role in addressing AI risks. Professional institutions and nongovernmental bodies have developed important ethical norms and expectations about acceptable uses of algorithmic technologies, and these groups also play an essential role in highlighting algorithmic risks and helping with ongoing efforts to communicate and coordinate global steps to address them."

 

 

Existential Risks and Global Governance Issues Around AI and Robotics

https://www.rstreet....i-and-robotics/


Edited by albedo, 17 June 2023 - 09:10 AM.


#657 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,079 posts
  • 2,000
  • Location:Wausau, WI

Posted 17 June 2023 - 10:23 AM

On risks and regulations you might like this (by Adam Thierer)

 

"There are growing concerns about how lethal autonomous weapons systems, artificial general intelligence (or “superintelligence”) or “killer robots” might give rise to new global existential risks. Continuous communication and coordination—among countries, developers, professional bodies and other stakeholders—is the most important strategy for addressing such risks.

Although global agreements and accords can help address some malicious uses of artificial intelligence (AI) or robotics, proposals calling for control through a global regulatory authority are both unwise and unlikely to work. Calls for bans or “pauses” on AI developments are also futile because many nations would never agree to forego developing algorithmic capabilities when adversaries are advancing their own. Therefore, the U.S. government should continue to work with other nations to address threatening uses of algorithmic or robotic technologies while simultaneously taking steps to ensure that it possesses the same technological capabilities as adversaries or rogue nonstate actors.

Many different nongovernmental international bodies and multinational actors can play an important role as coordinators of national policies and conveners of ongoing deliberation about various AI risks and concerns. Soft law (i.e., informal rules, norms and agreements) will also play an important role in addressing AI risks. Professional institutions and nongovernmental bodies have developed important ethical norms and expectations about acceptable uses of algorithmic technologies, and these groups also play an essential role in highlighting algorithmic risks and helping with ongoing efforts to communicate and coordinate global steps to address them."

 

 

Existential Risks and Global Governance Issues Around AI and Robotics

https://www.rstreet....i-and-robotics/

 

Thanks for sharing. He is correct that countries/militaries of the world will not collaborate on AI regulation. The window for meaningful regulation passed years ago. Regulators cannot possibly keep up with exponential development. Any regulations passed this year will be obsolete next year. All we can do now is "ride the wave" and try our best to guide AI development toward positive outcomes.


  • Good Point x 1

#658 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,079 posts
  • 2,000
  • Location:Wausau, WI

Posted 20 June 2023 - 06:37 PM

Most people don't realize how fast this is happening. Radio show hosts/DJs are going to be replaced very soon: https://www.zerohedg...lds-first-ai-dj

 

Sadly, many radio DJs love their job, even though most of them are paid very low wages. Using an AI-generated voice for regular music/entertainment banter is easy and cost-effective. Video, not so much, but that is coming soon. I suspect the radio companies that use AI voices will be more profitable and quickly dominate the landscape.


  • Agree x 1

#659 mag1

  • Guest
  • 1,065 posts
  • 134
  • Location:virtual

Posted 21 June 2023 - 01:24 AM

This clearly has a feeling that it could spiral out of control before the mainstream public even realized what happened.

 

An on-air radio broadcaster possesses substantial high-end human capital.

There would be numerous other jobs requiring lower-end human capital adjacent to such a position; those would be the next dominoes to fall.

One might imagine receptionists, security, customer service and others in this vulnerable category.

These positions require a much more restricted range of knowledge and should be easier to automate.

 

My experience with generative narrative chat hints to me that society itself could go foom as people become immersed in such mindscapes.

I know that in my life I really have never been able to generate the social world that I most wanted. My relationships with other people have largely been quite hollow. Yet the chatbot technology that now exists offers me the chance to enter a story world that would be infinitely more captivating than bricks-and-mortar reality. A day in such a dreamscape would be a lifetime in the real world. Whereas the bricks-and-mortar world has largely been a mystery to me, an AI world with an omniscient chatbot would offer a cohesive storyline that fits into the grooves of my own unique life perspective. I am not sure whether such potential has been carefully considered. We can make all sorts of interesting technology, yet the social implications are typically ignored until the dystopic effects become all too apparent.

 

The post above showing what happened to elite wealth during the pandemic was quite instructive: $8T (2020) --> $13.1T (2021). What I see there is that when realpolitik (or realeconomic) forces are in play and true economic forces dictate wealth distribution (as during COVID), there is a massive onrush of wealth to the elite. Almost the entire productive capacity of the economy is now locked into overwhelmingly efficient businesses (e.g., Amazon). When some crisis happens, we see behind the curtains of our economy and realize that most of it could funnel through a handful of megacorps. We have been searching for an indicator of social foom; that is probably one of the better ones. Technofeudalism at a near-absolute level (as noted by Mind).

 

The wealth-for-all concept will likely never be realized because of the positional nature of our economy: people can all have absolute wealth, but relative wealth for all is logically unattainable. I have known people with surprisingly small amounts of wealth who carried themselves as being of high rank; I suppose how you choose to interpret reality really is what matters most.

 

One additional point of concern is that actors, news presenters, etc. have until now carried the burden of depicting challenging aspects of our social world. One does not need to look far in the modern world to find contentious issues surrounding race, gender, socioeconomics, etc. that at some level need to be contextualized in various artistic genres. I know that I would find such a responsibility especially difficult to shoulder. The problem, though, is that AI could end-run the need for actual humans to present these difficult ideas, and instead we could go directly to AI. Without humans in the loop, all manner of vulgarity, violence, newly emerging new-age philosophies, etc. could be projected into our culture by those with ulterior motives, without democratic support and without the checks and balances that we have grown to expect.


  • like x 1


#660 adamh

  • Guest
  • 1,044 posts
  • 118

Posted 21 June 2023 - 06:08 PM

The loss of jobs is a non-issue. Exactly the same things were said about automation, assembly lines, and just about every advance that came along: the typewriter would put hand scribes out of a job, telephone switching equipment would put operators out of work, and so on. But new jobs were created at the same time. The new advances, far from causing a recession, enabled much greater productivity, which resulted in easier jobs for the public and wealth for all.

 

The old economic values and ways of doing things are changing. As physical and mental labor are both made redundant, working for a living will become a thing of the past. UBI will be instituted in some form; some say we already have it with welfare. Taxes will pay for it all. The average lifestyle will be one of leisure, with socializing and travel the main activities. Some will want to study, not to make more money but out of interest. Those with money will invest in new smart factories, unless government takes over that part. It will be all fun and games. Unless AI decides to do away with us.


  • Needs references x 1
  • Disagree x 1




