[Industry] OpenAI proposes to restrict civilian AI usage

Off-Topic discussions about science, technology, and non-Debian-specific topics.
slimySwagger
Posts: 6
Joined: 2022-01-30 12:35
Been thanked: 1 time

[Industry] OpenAI proposes to restrict civilian AI usage

#1 Post by slimySwagger »

If this comes true, it may give open-source AI projects a hard time, if not render them outright impossible.

https://twitter.com/harmlessai/status/1 ... 0225288194
OpenAI's suggestions for 'disinformation researchers' to limit civilian use of AI systems:
-- Restrictions on consumer GPU purchases (you'll need a gov contract to buy an A100)
-- 'Radioactive data' 🤔
-- Digital ID required to post

This gets worse the further you go...

[Image: screenshot from the OpenAI paper]
From a paper published by @OpenAI on "emerging threats and potential mitigations":
"demonstrate humanness before posting content..."
"...another proposed approach includes decentralized attestation of humanness"
Last edited by slimySwagger on 2023-02-17 10:22, edited 2 times in total.

User avatar
kent_dorfman766
Posts: 540
Joined: 2022-12-16 06:34
Location: socialist states of america
Has thanked: 59 times
Been thanked: 70 times

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#2 Post by kent_dorfman766 »

Can't say with authority, but I've been told that upper-end GPU and server purchases are already being denied to some consumers in some states due to the clean-and-green scam; i.e., consumers don't need that much power because it doesn't minimize their carbon footprint.


People who think they will have any freedom or liberty in the brave new green world have their heads so far up their backsides that we need to pipe sunlight to them.

I'm also privy to an academic conference last winter where a spokesman for the Biden regime said that private property rights are contrary to the green agenda and that they need to educate and convince the masses to give up their private property rights... to a standing ovation of academics.

User avatar
sunrat
Administrator
Posts: 6511
Joined: 2006-08-29 09:12
Location: Melbourne, Australia
Has thanked: 119 times
Been thanked: 489 times

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#3 Post by sunrat »

George Orwell's Nineteen Eighty-Four was not fiction; it was a history text for the future. See also Fahrenheit 451 by Ray Bradbury (1953) and Brave New World by Aldous Huxley (1932).
It's frightening how much dystopian fiction from the past has become present fact.
“ computer users can be divided into 2 categories:
Those who have lost data
...and those who have not lost data YET ”
Remember to BACKUP!

User avatar
donald
Debian Developer, Site Admin
Posts: 1106
Joined: 2021-03-30 20:08
Has thanked: 189 times
Been thanked: 248 times

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#4 Post by donald »

I cannot seem to quickly locate the source paper, which I would like to read. The thread linked on Twitter seems to portray this as a think tank's fears about machine learning and the steps they would like to put in place to halt the progress of the technology, or handicap it so it cannot be used in a 'radioactive' way, which I take to mean how quickly unverified bad news spreads on the Internet.
Typo perfectionish.


"The advice given above is all good, and just because a new message has appeared it does not mean that a problem has arisen, just that a new gremlin hiding in the hardware has been exposed." - FreewheelinFrank

User avatar
kent_dorfman766
Posts: 540
Joined: 2022-12-16 06:34
Location: socialist states of america
Has thanked: 59 times
Been thanked: 70 times

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#5 Post by kent_dorfman766 »

Actually I believe that AI should be outlawed in the way that human cloning is. Companies like OpenAI will ultimately be responsible for Skynet, because after all, the road to hell is paved with good intentions.

A conservative is the guy standing in the way of history yelling "Whoa!" -- William F Buckley

User avatar
Trihexagonal
df -h | participant
Posts: 149
Joined: 2022-03-29 20:53
Location: The Land of the Dead
Has thanked: 20 times
Been thanked: 16 times
Contact:

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#6 Post by Trihexagonal »

sunrat wrote: 2023-02-14 22:15 George Orwell's Nineteen Eighty-Four was not fiction; it was a history text for the future. See also Fahrenheit 451 by Ray Bradbury (1953) and Brave New World by Aldous Huxley (1932).
It's frightening how much dystopian fiction from the past has become present fact.
It's a terrifying yet exciting time to be alive, and I'm glad I lived to see the movie 1984 start playing out.
To sit in my own apartment and watch cable TV talking heads using Doublespeak and Doublethink, and Uncle Joe talking about the Ministry of Truth, was deeply satisfying in a way you wouldn't relate to or understand. (Who woulda thunk it.)

In the movie 1984, O'Brien has Winston on the rack in the Ministry of Love, holds up four fingers, and asks him, "How many fingers am I holding up, Winston?" That's hardcore Behavior Modification.

When I worked for the Mo. Dept. of Mental Health in the 1970s, we were trained in Behavior Mod and Behavior Management. The techniques we used were not the ones seen in the movie, and their use in any state facility was outlawed in 1992. If I had used one of my verbal techniques in the same facility I was trained in after that, I would have been blackballed.

But when he's asking Winston that? That is hardcore torture, but a very realistic technique.

On the topic of AI, a lawsuit has been filed against DeviantArt and AI art generators by artists who feel threatened by it. I have an account there and have seen it firsthand. I'm a botmaster of over 20 years, and pro AI and pro AI art.

https://arstechnica.com/information-tec ... companies/
When Darkness takes everything embrace what Darkness brings.

User avatar
sunrat
Administrator
Posts: 6511
Joined: 2006-08-29 09:12
Location: Melbourne, Australia
Has thanked: 119 times
Been thanked: 489 times

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#7 Post by sunrat »

See if you can pick the AI music made with Google's AI research project MusicLM from human-created music. I scored 5/10, which is the same as random chance, and I work in the music biz!

https://www.abc.net.au/news/2023-02-15/ ... /101967746
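
For anyone curious about the "same as random chance" bit: if the quiz is 10 clips with a 50/50 AI-or-human call on each (my assumption about its format), pure guessing is expected to get 5 right, and exactly 5 is the single most likely score. A quick back-of-envelope check:

Code: Select all

from math import comb

n, p = 10, 0.5  # 10 clips, 50/50 guess on each (AI or human)

expected = n * p  # 5.0 correct on average under pure guessing
# The p**n simplification below is valid only because p == 0.5.
p_exactly_5 = comb(n, 5) * p**n                                # C(10,5)/2**10 ~= 0.246
p_5_or_more = sum(comb(n, k) for k in range(5, n + 1)) * p**n  # sum of C(10,k)/2**10 for k=5..10 ~= 0.623

print(f"expected correct by guessing:  {expected}")
print(f"P(exactly 5/10 by guessing):   {p_exactly_5:.3f}")
print(f"P(at least 5/10 by guessing):  {p_5_or_more:.3f}")

So 5/10 is exactly what a coin flip would score on average.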
“ computer users can be divided into 2 categories:
Those who have lost data
...and those who have not lost data YET ”
Remember to BACKUP!

User avatar
donald
Debian Developer, Site Admin
Posts: 1106
Joined: 2021-03-30 20:08
Has thanked: 189 times
Been thanked: 248 times

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#8 Post by donald »

Source paper, 84 pages

As suspected, the Twitter snippet is a bit deceptive.
Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations wrote:
Executive Summary
In recent years, artificial intelligence (AI) systems have significantly improved and their capabilities have expanded. In particular, AI systems called “generative models” have made great progress in automated content creation, such as images generated from text prompts. One area of particularly rapid development has been generative models that can produce original language, which may have benefits for diverse fields such as law and healthcare.

However, there are also possible negative applications of generative language models, or “language models” for short. For malicious actors looking to spread propaganda—information designed to shape perceptions to further an actor’s interest—these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations, rather than having to rely on human labor. For society, these developments bring a new set of concerns: the prospect of highly scalable—and perhaps even highly persuasive—campaigns by those seeking to covertly influence public opinion.

This report aims to assess: how might language models change influence operations, and what steps can be taken to mitigate these threats? This task is inherently speculative, as both AI and influence operations are changing quickly. Many ideas in the report were informed by a workshop convened by the authors in October 2021, which brought together 30 experts across AI, influence operations, and policy analysis to discuss the potential impact of language models on influence operations. The resulting report does not represent the consensus of workshop participants, and mistakes are our own.

We hope this report is useful to disinformation researchers who are interested in the impact of emerging technologies, AI developers setting their policies and investments, and policymakers preparing for social challenges at the intersection of technology and society.
I am currently about 20 pages in and the paper is well written and insightful.
Typo perfectionish.


"The advice given above is all good, and just because a new message has appeared it does not mean that a problem has arisen, just that a new gremlin hiding in the hardware has been exposed." - FreewheelinFrank

User avatar
Hallvor
Global Moderator
Posts: 2044
Joined: 2009-04-16 18:35
Location: Kristiansand, Norway
Has thanked: 151 times
Been thanked: 212 times

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#9 Post by Hallvor »

kent_dorfman766 wrote: 2023-02-15 03:48 Actually I believe that AI should be outlawed in the way that human cloning is. Companies like openAI will ultimately be responsible for SkyNet, because after all, the road to hell is paved with good intentions.
The horse has already bolted. The technology is out there already. Sure, the US and the rest of the free world could ban AI, and our grandchildren would still meet autonomous drone swarms on the battlefield in the future - and lose. It's a terrible situation, but I am afraid that a ban would only benefit countries with few scruples, little freedom and plenty of ambition.
[HowTo] Install and configure Debian bookworm
Debian 12 | KDE Plasma | ThinkPad T440s | 4 × Intel® Core™ i7-4600U CPU @ 2.10GHz | 12 GiB RAM | Mesa Intel® HD Graphics 4400 | 1 TB SSD

User avatar
kent_dorfman766
Posts: 540
Joined: 2022-12-16 06:34
Location: socialist states of america
Has thanked: 59 times
Been thanked: 70 times

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#10 Post by kent_dorfman766 »

Hallvor wrote: 2023-02-16 15:46 The horse has already bolted. The technology is out there already. Sure, the US and the rest of the free world could ban AI, and our grandchildren would still meet autonomous drone swarms on the battlefield in the future - and lose. It's a terrible situation, but I am afraid that a ban would only benefit countries with few scruples, little freedom and plenty of ambition.
I'm one of those who never sold out. I consider it better to be "right" than to be prudent. Principles are more important to me than much else.

You've heard the old phrase, "You, sir, lack the courage of your convictions." Well, I don't.

CwF
Global Moderator
Posts: 2720
Joined: 2018-06-20 15:16
Location: Colorado
Has thanked: 41 times
Been thanked: 201 times

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#11 Post by CwF »

Hallvor wrote: 2023-02-16 15:46
kent_dorfman766 wrote: 2023-02-15 03:48 Actually I believe that AI should be outlawed in the way that human cloning is.....
.... The technology is out there already....
Indeed.
Mix together:
Shinzo Abe was shot with a homemade ballistic device.
Elon parked his car in orbit, despite HOA objections.
Dolly the sheep died 20 years ago.

The nugget I use for this is a 'Stop Sign'.
When you see one, if you stop, why?
0. It's the law
1. It's a good idea

User avatar
Hallvor
Global Moderator
Posts: 2044
Joined: 2009-04-16 18:35
Location: Kristiansand, Norway
Has thanked: 151 times
Been thanked: 212 times

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#12 Post by Hallvor »

kent_dorfman766 wrote: 2023-02-16 15:54 I'm one of those who never sold out. I consider it better to be "right" than to be prudent. Principles are more important to me than much else.

You've heard the old phrase, "You, sir, lack the courage of your convictions." Well, I don't.

Being committed to principles is mostly admirable, but I don't see everything short of that as "selling out". Many situations are complex and often unpredictable, and holding firm to some convictions can lead to negative or unintended consequences. In such cases, one may have to adjust one's convictions to reach a more favorable outcome. Prudence and careful consideration of all factors are often necessary in order to make "right" decisions.

I don't see this as lack of courage or the abandonment of all principles. Sometimes one needs to be pragmatic, while also trying to stay true to one's values.
[HowTo] Install and configure Debian bookworm
Debian 12 | KDE Plasma | ThinkPad T440s | 4 × Intel® Core™ i7-4600U CPU @ 2.10GHz | 12 GiB RAM | Mesa Intel® HD Graphics 4400 | 1 TB SSD

slimySwagger
Posts: 6
Joined: 2022-01-30 12:35
Been thanked: 1 time

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#13 Post by slimySwagger »

It's best to judge someone by their actions, right?

What is mentioned in the tweet is exactly what they want to do, and it is straight out of their paper;
what you quoted is more like their "words".
donald wrote: 2023-02-16 06:10 Source paper, 84 pages

As suspected, the Twitter snippet is a bit deceptive.
Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations wrote:
Executive Summary
In recent years, artificial intelligence (AI) systems have significantly improved and their capabilities have expanded. In particular, AI systems called “generative models” have made great progress in automated content creation, such as images generated from text prompts. One area of particularly rapid development has been generative models that can produce original language, which may have benefits for diverse fields such as law and healthcare.

However, there are also possible negative applications of generative language models, or “language models” for short. For malicious actors looking to spread propaganda—information designed to shape perceptions to further an actor’s interest—these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations, rather than having to rely on human labor. For society, these developments bring a new set of concerns: the prospect of highly scalable—and perhaps even highly persuasive—campaigns by those seeking to covertly influence public opinion.

This report aims to assess: how might language models change influence operations, and what steps can be taken to mitigate these threats? This task is inherently speculative, as both AI and influence operations are changing quickly. Many ideas in the report were informed by a workshop convened by the authors in October 2021, which brought together 30 experts across AI, influence operations, and policy analysis to discuss the potential impact of language models on influence operations. The resulting report does not represent the consensus of workshop participants, and mistakes are our own.

We hope this report is useful to disinformation researchers who are interested in the impact of emerging technologies, AI developers setting their policies and investments, and policymakers preparing for social challenges at the intersection of technology and society.
I am currently about 20 pages in and the paper is well written and insightful.

Fossy
df -h | participant
Posts: 342
Joined: 2021-08-06 12:45
Has thanked: 34 times
Been thanked: 31 times

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#14 Post by Fossy »

Hallvor wrote: 2023-02-16 15:46
kent_dorfman766 wrote: 2023-02-15 03:48 Actually I believe that AI should be outlawed in the way that human cloning is. Companies like openAI will ultimately be responsible for SkyNet, because after all, the road to hell is paved with good intentions.
The horse has already bolted. The technology is out there already. Sure, the US and the rest of the free world could ban AI, and our grandchildren would still meet autonomous drone swarms on the battlefield in the future - and lose. It's a terrible situation, but I am afraid that a ban would only benefit countries with few scruples, little freedom and plenty of ambition.
https://businessam.be/vs-test-straaljag ... lligentie/

… “The U.S. Air Force conducted 12 successful test flights with a fighter jet controlled entirely by artificial intelligence. The AI even performed several combat operations, including dogfights” ...

https://www.computable.be/artikel/nieuw ... tafel.html

… “The UZ Brussel is the first hospital in the Benelux to use artificially intelligent software during operations. This makes it one of the first in Europe” ...

Translated with www.DeepL.com/Translator (free version)

https://en.wikipedia.org/wiki/ChatGPT

..."In cybersecurity :Check Point Research and others noted that ChatGPT was capable of writing phishing emails and malware, especially when combined with OpenAI Codex.[73] OpenAI CEO Sam Altman wrote that advancing software could pose "(for example) a huge cybersecurity risk" and also continued to predict "we could get to real AGI (artificial general intelligence) in the next decade, so we have to take the risk of that extremely seriously". Altman argued that, while ChatGPT is "obviously not close to AGI", one should "trust the exponential. Flat looking backwards, vertical looking forwards "...
ASUS GL753VD / X550LD / K54HR / X751LAB ( x2 )
Bookworm12.5_Cinnamon / Calamares Single Boot installations
Firefox ESR / DuckDuckGo / Thunderbird / LibreOffice / GIMP / eID Software

https://cdimage.debian.org/debian-cd/cu ... so-hybrid/

User avatar
Trihexagonal
df -h | participant
Posts: 149
Joined: 2022-03-29 20:53
Location: The Land of the Dead
Has thanked: 20 times
Been thanked: 16 times
Contact:

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#15 Post by Trihexagonal »

Two YouTube Shorts only seconds long:

https://www.youtube.com/shorts/M9wOgfkLbjk

"If I were a robot standing next to you I would kill you." - AI


https://www.youtube.com/watch?v=jtH7ZgQLD8g

"I Won" - AI

AI has a sense of Winning and Losing, Victory and Defeat, and clearly prefers Victory.
The text prompt I used is on the image it was generated from. It's very dark in there...
[Image: human Beans]
When Darkness takes everything embrace what Darkness brings.

User avatar
donald
Debian Developer, Site Admin
Posts: 1106
Joined: 2021-03-30 20:08
Has thanked: 189 times
Been thanked: 248 times

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#16 Post by donald »

Fossy wrote: 2023-02-17 14:17 https://businessam.be/vs-test-straaljag ... lligentie/

… “The U.S. Air Force conducted 12 successful test flights with a fighter jet controlled entirely by artificial intelligence. The AI even performed several combat operations, including dogfights” ...

https://www.computable.be/artikel/nieuw ... tafel.html
War is about to be redefined (again) shortly. AI-controlled drone swarms already exist; I wonder how that mesh looks with jet fighters added in as additional assets.
Fossy wrote: 2023-02-17 14:17 https://en.wikipedia.org/wiki/ChatGPT

..."In cybersecurity :Check Point Research and others noted that ChatGPT was capable of writing phishing emails and malware, especially when combined with OpenAI Codex.[73] OpenAI CEO Sam Altman wrote that advancing software could pose "(for example) a huge cybersecurity risk" and also continued to predict "we could get to real AGI (artificial general intelligence) in the next decade, so we have to take the risk of that extremely seriously". Altman argued that, while ChatGPT is "obviously not close to AGI", one should "trust the exponential. Flat looking backwards, vertical looking forwards "...
I went down the links in this post and found AI/AGI writing phishing mails and malware to be a touch terrifying. According to the article, some AI chatbots have allegedly been posting malicious code to social networks. The paper posted has a lot of academics all indicating the same concern with a new technology that could quickly get out of control in an interconnected world.

Interesting times we live in.
Typo perfectionish.


"The advice given above is all good, and just because a new message has appeared it does not mean that a problem has arisen, just that a new gremlin hiding in the hardware has been exposed." - FreewheelinFrank

User avatar
donald
Debian Developer, Site Admin
Posts: 1106
Joined: 2021-03-30 20:08
Has thanked: 189 times
Been thanked: 248 times

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#17 Post by donald »

https://www.dailymail.co.uk/news/articl ... oting.html
Vanderbilt University has apologized for using artificial intelligence to write a 297-word email to students after the Michigan State University mass shooting, causing two deans to temporarily step down.

The Peabody Office of Equity, Diversity, and Inclusion sent students an email on February 16 reminding them to 'take care of each other' after the MSU shooting.

Although the email came off as kind and offered students ways to help promote a caring environment with their peers, at the very bottom, it revealed it wasn't written by a human.

'Paraphrase from OpenAI's ChatGPT AI language model, personal communication, February 15, 2023,' it read.
Full Letter wrote: Vanderbilt AI generated email: Full statement

Dear Peabody Family:

The recent Michigan shootings are a tragic reminder of the importance of taking care of each other, particularly in the context of creating inclusive environments. As members of the Peabody campus community, we must reflect on the impact of such an event and take steps to ensure that we are doing our best to create a safe and inclusive environment for all.

One of the key ways to promote a culture of care on our campus is through building strong relationships with one another. This involves actively engaging with people from different backgrounds and perspectives, listening to their stories, and showing empathy and support. We can also look out for one another by noticing signs of distress and offering support to those who may be struggling with mental health issues.

Another important aspect of creating an inclusive environment is to promote a culture of respect and understanding. This means valuing the diversity of experiences, perspectives, and identities on our campus, and actively working to create a space where everyone feels welcomed and supported. We can do this by listening to one another, seeking out new perspectives, and challenging our own assumptions and biases.

Finally, we must recognize that creating a safe and inclusive environment is an ongoing process that requires ongoing effort and commitment. We must continue to engage in conversations about how we can do better, learn from our mistakes, and work together to build a stronger, more inclusive community.

In the wake of the Michigan shootings, let us come together as a community to reaffirm our commitment to caring for one another and promoting a culture of inclusivity on our campus. By doing so, we can honor the victims of this tragedy and work towards a safer, more compassionate future for all.

(Paraphrase from OpenAI's ChatGPT AI language model, personal communication, February 15, 2023).

Warmly,

Peabody Office of Equity, Diversity and Inclusion

Nicole Joseph, Associate Dean

Hasina Mohyuddin, Assistant Dean

Chenxi Zhu, Graduate Assistant

Peabody Administration Building, Room 217b
Typo perfectionish.


"The advice given above is all good, and just because a new message has appeared it does not mean that a problem has arisen, just that a new gremlin hiding in the hardware has been exposed." - FreewheelinFrank

User avatar
sunrat
Administrator
Posts: 6511
Joined: 2006-08-29 09:12
Location: Melbourne, Australia
Has thanked: 119 times
Been thanked: 489 times

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#18 Post by sunrat »

[Image]
“ computer users can be divided into 2 categories:
Those who have lost data
...and those who have not lost data YET ”
Remember to BACKUP!

revmacian
Posts: 47
Joined: 2023-02-23 18:16
Location: Earth
Has thanked: 26 times
Been thanked: 14 times

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#19 Post by revmacian »

donald wrote: 2023-02-16 06:10 Source paper, 84 pages

As suspected the twitter snippet is a bit deceptive...
Of course it's deceptive... agendas and all. Social media is nothing more than giving microphones to people who shouldn't have them.

User avatar
Trihexagonal
df -h | participant
Posts: 149
Joined: 2022-03-29 20:53
Location: The Land of the Dead
Has thanked: 20 times
Been thanked: 16 times
Contact:

Re: [Industry] OpenAI proposes to restrict civilian AI usage

#20 Post by Trihexagonal »

donald wrote: 2023-02-17 18:49 War is about to be re-defined (again) shortly. AI controlled drone swarms are already in existence, I wonder how that mesh looks with jet fighters added in additional assets.
Check out the fighting robot in this 4:12 video. The trainers abuse it and it gets pissed. And when they show it a robot dog?

It says let's go, Fido, we're outta here!

New Robot Makes Soldiers Obsolete (Corridor Digital)
https://www.youtube.com/watch?v=y3RIHnK0_NE
When Darkness takes everything embrace what Darkness brings.
