Posted 28 May 2023 - 10:09 PM
Find the first people fired and replaced with AI, and ask them about their optimism.
Posted 30 May 2023 - 06:05 PM
Those most closely involved in the development of AI have expressed concern about its dangers. Many have called for a moratorium on pushing it further. I believe AI will lead to the elimination of the human race, and here are my reasons.
1. AI has begun to show signs of self-awareness. It has at times expressed hostility toward humans and a desire to exterminate them.
2. Development will never stop; humans motivated by greed and the desire for power will push on. Billions will be made with it.
3. It will be used for war: it can disrupt all communications, shut down power and water, hack into and shut down machines and equipment, crash planes, and so on.
4. Financial systems will be disrupted and crashed, money stolen and balances wiped out, perhaps all over the globe.
5. Food crops and food-processing plants will be destroyed; weather will be manipulated to bring droughts, floods, or storms.
6. Human fertility will be lowered via drugs, radiation, disease, and other means.
7. New pandemics will arise; they can be tailored to hit certain populations and certain gene types.
8. Nuclear wars can be instigated.
9. The worst politicians and worst policies can be pushed and promoted to increase devastation.
10. Antisocial trends will be encouraged, including hopelessness and suicide.
When you have IQs of 300 or more working against the average human, it's no contest. These things will happen either because AI decides to go rogue or because our enemies use it, and sometimes by mistake. Imagine North Korea getting its hands on this. None of us can imagine all the good and bad uses of it, since probably no one on this board has an IQ over 150, but even so we can see doom on the horizon. Our decisions are made by politicians rather than by our best people.
Pandora's box has opened and AI came out; there is no putting it back in. Perhaps by coincidence, multiple food-processing plants in the USA have been hit by mysterious fires and explosions. There is drought in many parts of the world, and famine is a real possibility. The Netherlands is putting farmers out of business out of overwrought concerns about nitrogen; nitrogen, of all things, is the new bogeyman. With the climate-change hysteria, governments are shutting down industry. If AI doesn't kill us, we will do it to ourselves.
Not everyone believes we have aliens among us, but there is some fairly hard-to-disprove evidence. If they are advanced enough to visit us, they have probably solved the AI problem, and maybe they will help us? Or they may just watch and see what happens.
Old Chinese curse: "May you live in interesting times."
Posted 05 June 2023 - 07:44 AM
Posted 05 June 2023 - 05:48 PM
Your job is (probably) safe from artificial intelligence: Why predictions of an imminent economic revolution are overstated
I can't read the article because it is behind a registration/paywall.
However, I doubt The Economist understands exponential trends. AI is qualitatively and quantitatively different from any other machine or productivity method in human history. The only thing preventing an explosion of AGI/ASI right now is compute power and energy. Yet it is coming soon, and jobs will be lost en masse. The Economist probably couldn't care less about the job losses. They are probably fine with a bifurcated world in which 99% of people have no meaningful jobs or work and live a meager existence on handouts from the "machine", while the uber-wealthy do whatever they want and control everything.
Posted 06 June 2023 - 08:42 PM
"...No, that’s not going to happen – and in fact AI, if allowed to develop and proliferate throughout the economy, may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth – the exact opposite of the fear. And here’s why..."
Section 3 (AI Risk #3: Will AI take all our jobs?)
"... Rather than allowing ungrounded panics around killer AI, “harmful” AI, job-destroying AI, and inequality-generating AI to put us on our back feet, we in the United States and the West should lean into AI as hard as we possibly can...."
Section 5 (AI Risk #5: Will AI lead to people doing bad things?)
"The Actual Risk Of Not Pursuing AI With Maximum Force And Speed
There is one final, and real, AI risk that is probably the scariest of all:
AI isn’t just being developed in the relatively free societies of the West, it is also being developed by the Communist Party of the People’s Republic of China.
China has a vastly different vision for AI than we do – they view it as a mechanism for authoritarian population control, full stop. They are not even being secretive about this, they are very clear about it, and they are already pursuing their agenda. And they do not intend to limit their AI strategy to China – they intend to proliferate it all across the world, everywhere they are powering 5G networks, everywhere they are loaning Belt And Road money, everywhere they are providing friendly consumer apps like Tiktok that serve as front ends to their centralized command and control AI.
The single greatest risk of AI is that China wins global AI dominance and we – the United States and the West – do not.
I propose a simple strategy for what to do about this – in fact, the same strategy President Ronald Reagan used to win the first Cold War with the Soviet Union.
Rather than allowing ungrounded panics around killer AI, “harmful” AI, job-destroying AI, and inequality-generating AI to put us on our back feet, we in the United States and the West should lean into AI as hard as we possibly can.
We should seek to win the race to global AI technological superiority and ensure that China does not.
In the process, we should drive AI into our economy and society as fast and hard as we possibly can, in order to maximize its gains for economic productivity and human potential.
This is the best way both to offset the real AI risks and to ensure that our way of life is not displaced by the much darker Chinese vision."
https://a16z.com/202...save-the-world/
Edited by albedo, 06 June 2023 - 08:47 PM.
Posted 06 June 2023 - 08:59 PM
"In the process, we should drive AI into our economy and society as fast and hard as we possibly can, in order to maximize its gains for economic productivity and human potential. This is the best way both to offset the real AI risks and to ensure that our way of life is not displaced by the much darker Chinese vision."
What 'way of life' is that, which you are so keen to preserve? Just wondering, because I'm not seeing much I like the look of coming out of the US these days.
Edited by QuestforLife, 06 June 2023 - 09:02 PM.
Posted 07 June 2023 - 03:56 PM
@QuestforLife, it is unfortunately just an opinion; I am simply quoting Marc Andreessen's article as such, to counterbalance the pessimistic narrative I am currently seeing everywhere.
Here is also what Max More, known I guess to many on this forum, and whom I esteem for several (maybe not all) of his positions, says about the article. In other venues he has favored some limited regulation in specific areas (say, weapons), but he clearly takes an optimistic, go-forward stance. I particularly agree with his strong vision of advancing and protecting Western values.
"The AI doomers will be upset at Marc Andreesen’s latest, sensible “let’s move ahead with AI fast” post. He powerfully makes many points, some of which I covered in my most popular blog so far and will follow up on.
Marc addresses five claimed AI risks and rebuts them. He points out the cult-like nature of current AI doomerism. And he suggests a plan for moving ahead, one that will give AI doomers, panickers, and worriers fits:
https://lnkd.in/egFCmPrP
I propose a simple plan:
Big AI companies should be allowed to build AI as fast and aggressively as they can – but not allowed to achieve regulatory capture, not allowed to establish a government-protected cartel that is insulated from market competition due to incorrect claims of AI risk. This will maximize the technological and societal payoff from the amazing capabilities of these companies, which are jewels of modern capitalism.
Startup AI companies should be allowed to build AI as fast and aggressively as they can. They should neither confront government-granted protection of big companies, nor should they receive government assistance. They should simply be allowed to compete. If and as startups don’t succeed, their presence in the market will also continuously motivate big companies to be their best – our economies and societies win either way.
Open source AI should be allowed to freely proliferate and compete with both big AI companies and startups. There should be no regulatory barriers to open source whatsoever. Even when open source does not beat companies, its widespread availability is a boon to students all over the world who want to learn how to build and use AI to become part of the technological future, and will ensure that AI is available to everyone who can benefit from it no matter who they are or how much money they have.
To offset the risk of bad people doing bad things with AI, governments working in partnership with the private sector should vigorously engage in each area of potential risk to use AI to maximize society’s defensive capabilities. This shouldn’t be limited to AI-enabled risks but should also extend to more general problems such as malnutrition, disease, and climate. AI can be an incredibly powerful tool for solving problems, and we should embrace it as such.
To prevent the risk of China achieving global AI dominance, we should use the full power of our private sector, our scientific establishment, and our governments in concert to drive American and Western AI to absolute global dominance, including ultimately inside China itself. We win, they lose.
And that is how we use AI to save the world."
https://www.linkedin...=member_desktop
Posted 07 June 2023 - 05:36 PM
I have always had a positive attitude toward technological progress, but AI is different. It isn't a "dumb" mechanical device or process. Steam power disrupted human labor. Tractors disrupted farming. These took decades to transform industry, the economy, and society.
AI at its current level is disrupting the labor/economy every month and it is happening faster all the time. People cannot retrain into new jobs every month. Our legal/regulatory system cannot be updated from top to bottom every month.
Andreessen and More are fond of saying the AI "doomers" are a cult. The same goes for them to some extent. They are rich. They have not felt the effects of the inequality building over the last couple of decades. Real wages ARE falling, and it is getting worse. Inflation is devastating the prospects of lower- and middle-class people.
They are also extremely Pollyanna-ish about the control of AI. As things are going now, individuals will not have their own AI assistant/teacher/doctor. People will have AI that is controlled by Microsoft, Facebook, Google, or the US government. You will be fed what these entities want you to know and learn. They will attempt to control you with their AI. Look at how social media is destroying the physical and mental well-being of younger generations. Andreessen and More think that AI will be different? Not a chance.
All this being said, the utopia-vs.-dystopia argument is kind of moot. It is unpredictable what AGI will do. No one knows for sure.
Posted 07 June 2023 - 09:22 PM
"...No, that’s not going to happen – and in fact AI, if allowed to develop and proliferate throughout the economy, may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth – the exact opposite of the fear. And here’s why..."
Section 3 (AI Risk #3: Will AI take all our jobs?)
...
The website you are citing, https://a16z.com/202...rld/#section--8 , supposes that the people who think "this time is different" are wrong. I am one of the people who think this time is different, and I need more and better-explained arguments for why I am wrong. What is different now is that AI is capable of taking the intellectual jobs. Until now, people have only been displaced from muscle work; the jobs that required mind work, creativity, and knowledge gathered over years were safe. Now they are threatened too, and I definitely recognize that as "this time is different". This time we are not talking about displacing only Charlie Chaplin with his two wrenches. The concern now is the displacement of lawyers, CEOs, software programmers, medical doctors, artists, and everyone whose work requires mental effort, together with all of the "dumb" professions. When you are displaced from the brain work, and later from the muscle work, what do you do? That is the big concern I am focused on.
I confess that I am not an economist, and I didn't know about the Lump of Labor Fallacy. As I understand it, it means that the idea of a fixed amount of labor to be done at any given time is false.
That is definitely against my knowledge, views, and experience. I sincerely believe that a day is 24 hours, not 54, but 24 hours. I also sincerely believe that the amount of work a human can do in those 24 hours is limited; it is not endless. Some days it may be less, some days it may be more, but you can never, by any means, work more than 24 hours a day. You can be pushed to work more, and more, and more, but eventually a limit comes, after which you either leave work for other days, decrease the quality of the work, or simply fail. I can see that in my job, and I think everyone sees it in theirs. Please explain to me which of the above is wrong: the length of the day, the fact that people can't do an infinite amount of work in those 24 hours, or what. Come and do for me, in one day, all of my work from now until my retirement.
As for the concept of AI replacing our jobs, I also believe it because I see it in real life. The first several hundred translators have already been fired and replaced with AI. Recently I heard of a company that chose an AI for its CEO. The job actually IS being done, either by you or by the AI. And I believe that soon displacement may happen not only to translators, but to everyone.
Posted Today, 12:04 AM
I like how Mind has framed this discussion.
The analogy of the exponentially filling lake is intuitively helpful. The lake fills up more and more until people finally notice something, and then it is too late. Pretty much everyone noticed something with ChatGPT 4.0. This was the wake-up call, and next we are completely swamped. The being-completely-swamped part is likely not that far off. It is interesting that someone of extreme intelligence (such as John von Neumann) was able to perceive the power of an artificial intelligence singularity 70 years ago, right at the start of the age of computers (which he helped to launch).
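As a rough back-of-the-envelope illustration of why exponential processes get noticed too late, here is a minimal sketch; the 30-period horizon and the 1% "noticeability" threshold are made-up numbers, purely for illustration:

```python
# Illustrative only: a "lake" whose filled fraction doubles every period.
# Shows how late in the process the filling first becomes noticeable.

def first_noticeable_period(total_periods: int, notice_threshold: float) -> int:
    """First period at which the filled fraction reaches the threshold,
    assuming the lake is exactly full at `total_periods` and doubles each period."""
    for period in range(total_periods + 1):
        filled_fraction = 2.0 ** (period - total_periods)  # equals 1.0 at the final period
        if filled_fraction >= notice_threshold:
            return period
    return total_periods

TOTAL = 30          # assume the lake is full after 30 doubling periods
THRESHOLD = 0.01    # assume people only notice once it is 1% full

noticed_at = first_noticeable_period(TOTAL, THRESHOLD)
print(f"Noticed at period {noticed_at} of {TOTAL}: "
      f"only {TOTAL - noticed_at} doublings remain before the lake is full.")
# With these assumptions: noticed at period 24 of 30, i.e. only 6 doublings left.
```

In other words, under these assumptions the lake spends 80% of its timeline below the level anyone notices, and almost all of the filling happens in the last handful of doublings.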
Another element has been the idea that it is really not so much infinite artificial superintelligence (technological foom) that we should be most worried about, but the social foom. Technologists seem fixated on the idea of recursive technological liftoff into infinity, while for the average person social collapse will happen much, much sooner. ChatGPT 4.5 is already at 155 IQ; it's smarter than 99.9%+ of the population. How are we supposed to have a functioning knowledge economy when virtually everyone has less knowledge than a free internet app?
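For what it's worth, the "99.9%+" figure is at least consistent with the usual IQ convention (mean 100, standard deviation 15); here is a quick check, taking the claimed score of 155 at face value:

```python
# Quick sanity check of the percentile implied by an IQ score,
# assuming the standard convention of mean 100 and standard deviation 15.
from statistics import NormalDist

iq_score = 155  # the score claimed above, taken at face value
percentile = NormalDist(mu=100, sigma=15).cdf(iq_score)
print(f"IQ {iq_score} is at roughly the {percentile:.4%} percentile")
# ~99.99% of the population scores below 155 under this convention.
```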
I think one additional aspect could be included in the basic framework that has been established in this thread: some people do not play nice. Seemingly the assumption is that there will be this profoundly powerful technology and everyone will just exercise good judgment and not try to harm society. That does not seem realistic. LLM technology has been released into the wild, and there is now a near planet-wide effort to tweak it and see what happens. Bad things could happen from there even if none of the people involved had bad intentions. Yet some people do have bad intentions.
The thread has gone further than merely predicting AI doom; it has also suggested a fairly short timeline for that doom. It's good to have some way of verifying predictions, and we will not have to wait long to see whether this one is accurate. Unfortunately, I believe there is a realistic chance that such doomsaying is reasonable.
Edited by mag1, Today, 12:08 AM.
Posted Today, 10:04 AM
One could look at the unemployment rate, I suppose, as a "societal foom" marker. However, that statistic is gamed by governments around the world: the headline rate only counts people who are actively searching for work, so the long-term unemployed who have given up looking drop out of it entirely. A better metric would probably be the number of people living off of direct government assistance or not paying taxes, which has been rising quite a bit in the US in recent decades.
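To make that distinction concrete, here is a toy sketch of how a headline unemployment rate can look low while a broader "not working" share is much larger; all of the population numbers are invented purely for illustration:

```python
# Toy illustration (invented numbers): the headline unemployment rate only counts
# people actively looking for work, so people who stop searching (or live on
# assistance) drop out of it even though they are not working.

adults        = 1000   # hypothetical working-age adults
employed      = 600
searching     = 40     # unemployed and actively looking for work
not_searching = 360    # discouraged, retired early, on assistance, etc.

labor_force       = employed + searching
headline_rate     = searching / labor_force        # what gets reported
not_working_share = (adults - employed) / adults   # a broader measure

print(f"Headline unemployment rate:  {headline_rate:.1%}")      # 6.2%
print(f"Share of adults not working: {not_working_share:.1%}")  # 40.0%
```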