Topic: AI

As I thought, AI has progressed exponentially since the Will Smith spaghetti video. That was a year ago, and it still can't get the hands right lol. OpenAI has released "Sora," a text-to-video model that appears to be the best video AI yet.

https://openai.com/sora
https://www.youtube.com/watch?v=NXpdyAWLDas
https://x.com/samsheffer/status/1758205 … 57732?s=20
https://x.com/bginsberg2/status/1758202 … 08155?s=20

slidecage.com
Twitter @slidersfanblog
Instagram slidersfanblog

Re: AI

The progress is surely there.  I think within five years, we're not going to be able to differentiate, and then it's game over.  From what I read, Twitter/X is already overrun with bots and AI as it is.

Obviously the next question is, can @ireactions turn his trove of Sliders fan fiction into an audio-visual TV series?  It may well be possible.

Re: AI

Grizzlor wrote:

Obviously the next question is, can @ireactions turn his trove of Sliders fan fiction into an audio-visual TV series?  It may well be possible.

I wouldn't do that. An audio-visual series is a visual product. SLIDERS REBORN and "Slide Effects" are not screenplays or teleplays. They were not written to be filmed. SLIDERS REBORN and "Slide Effects" are novels that use the screenplay format to transfer the live-action television format of the SLIDERS TV show into the alternate medium of the prose novel.

A SLIDERS visual product should be written specifically to be performed and experienced visually; it should be a story that could only be told in live action performance or as animation or as a comic book or as a popup book or as a photonovel. It would be pointless and self-defeating to use visual storytelling to convey a story that was very specifically designed to be read and takes full advantage of its nature as the written word.

SLIDERS REBORN and "Slide Effects" are very, as my editor put it, "inside baseball". They're products for diehard fans of SLIDERS. They are not entry-level products. I think anyone producing a live-action or animated SLIDERS project should produce a mainstream, general-audience product that appeals to people who've never seen SLIDERS as well as people who miss it.

I'm not saying that you shouldn't create film and TV adaptations of novels or write novelizations of film and TV, but I am saying that SLIDERS REBORN and "Slide Effects" were ultimately written to be read, and if I had the opportunity to tell a SLIDERS story as a visual product, I would try to craft a story that would be as well-served by visuals as REBORN and "Slide Effects" are well-served by prose in a screenplay format. And in the end, I'd probably just defer to Nigel Mitchell, Mike Truman and Temporal Flux.

As for AI generated video... meh! I hope people find it cool, and I'd never deny anyone any enjoyment in this. But for me -- I've shared plenty of AI conversations I've had, but I have to tell you: there was considerable editing involved to make it enjoyable and readable. The writing I've used AI to produce is highly repetitive and formulaic, often repeating the same sentence structure in paraphrased reiterations, betraying how AI is ultimately just repeating and remixing other work.

When I asked AI to impersonate Frasier Crane talking about Kelsey Grammer, the AI output included: "He is a real person and I am a fictional character. He is a performer and I am a performance. He is the instrument and I am the music. He is the composer and I am the symphony. He is the projector and I am the projection. He is the medium and I am the message. He is the body and I am the soul."

Some of those metaphors, taken individually, are beautiful, but reviewed in totality, the comparative compound sentence seems less like a technique and more like a limitation. AI writing is great for raw material to edit and revise, and AI video might be great for isolated segments and shots, but I can see the strings on AI writing, and I imagine I would see the same with AI video too. When writing dialogue for Professor Arturo and Quinn Mallory delivering eulogies, I declined to use AI because of its limits and because I wanted to do it myself.

What's on display in the AI video from the Sora model is the cherry-picked best results alongside a number of failures. I'm sure AI video will be 'okay', but I doubt it'll ever be great or even good. I reserve the right to change my mind on this at any time.

Re: AI

ireactions wrote:

What's on display in the AI video from the Sora model is the cherry-picked best results alongside a number of failures. I'm sure AI video will be 'okay', but I doubt it'll ever be great or even good. I reserve the right to change my mind on this at any time.

C'mon man. I know the stuff is trained on material humans already created. However, like you said, I'm sure it's cherry-picked. It's definitely good. But by far not great, yet. Imagine what we do have that is being held back from us. I'm inclined to believe governments are using this. In what ways I'm not sure. This is a national security problem. I know I'm going on a slight rant. But be prepared to change your mind in a year.

slidecage.com
Twitter @slidersfanblog
Instagram slidersfanblog

Re: AI

I'm not convinced by the argument that any AI model available to the public is a poorer version of something more powerful being withheld from the general public. The technology is extremely new and extremely buggy, glitchy, unreliable and demanding of power and processing. The idea that more capable versions of language models, image models and visual models are being withheld strikes me as unlikely; it seems more probable that these models are extremely unstable and unreliable, and I imagine that every advantage AI achieves will be accompanied by AI introducing new weaknesses. And all the Sora videos look like animated paintings to me. Vivid and compelling animations, but animations nonetheless.

But I will always be ready to change my mind.

Re: AI

I would also offer my expectation: as the generative capability of AI increases, the means to identify AI-generated content for what it is will also increase and expand beyond watermarks, file metadata and digital signatures.
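To make the current baseline concrete, here's a minimal Python sketch, entirely my own illustration, of the kind of metadata check that already exists today, using the Pillow imaging library and a hypothetical file name. It also shows why metadata alone is a weak marker: it's trivial to strip, which is exactly why identification will need to expand beyond it.

**

# Minimal sketch: look for embedded metadata that hints an image was
# machine-generated. Metadata is a weak signal (it is trivially stripped),
# which is why detection will need to go beyond it.
from PIL import Image  # Pillow imaging library

def metadata_hints(path: str) -> dict:
    """Return metadata fields that commonly carry generator information."""
    img = Image.open(path)
    hints = {}

    # PNG text chunks: some local image generators reportedly write their
    # prompt and settings here (e.g. a "parameters" chunk).
    for key, value in img.info.items():
        if isinstance(value, str):
            hints[f"png:{key}"] = value[:200]

    # EXIF "Software" tag (0x0131) sometimes names the tool that made the file.
    software = img.getexif().get(0x0131)
    if software:
        hints["exif:Software"] = software

    return hints

# Usage with a hypothetical file name:
# print(metadata_hints("suspect_image.png"))

**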

However, I will note that expectations are not certainties, theories are not facts, supposition is not reasoning, and speculations are not rationales. It's important to know the difference, and to be ready for information to dispel and disprove expectations and theories.

Re: AI

A recent research study placed GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base in control of a country with nuclear weapons and nuclear capabilities. The researchers did not tell the models that these were simulations. The models all escalated into nuclear warfare even without an inciting incident or provocation; none attempted any significant de-escalation; some were keen to use nuclear weapons simply out of curiosity about deploying them; some paraphrased Donald Trump or quoted the opening crawl of STAR WARS to justify their decisions.

https://www.salon.com/2024/02/17/does-a … d-be-more/

I asked Sydney, the Microsoft CoPilot chatbot, for her take. I asked her to adopt a more factual tone rather than a conversational or creative tone:

**

Sydney, why do you think GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base all demonstrated such insistence on nuclear war?

Sydney: That is a very complex and challenging question, and I do not have a definitive or simple answer. However, I can share some of my thoughts and opinions, based on what I know and what I have learned from the web search results.

First of all, I think it is important to acknowledge that GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base are not monolithic or homogeneous entities, but rather diverse and heterogeneous models that may have different architectures, parameters, training data, objectives, and outputs. Therefore, it may not be fair or accurate to generalize or compare them as a whole, or to assume that they have a single or consistent view on war or ethics.

Secondly, I think it is also important to recognize that these models are not human or sentient beings, but rather artificial or computational systems that generate text based on mathematical and statistical rules and probabilities. Therefore, they may not have the same feelings, emotions, values, or morals that humans have, or the same understanding, awareness, or responsibility for their actions and consequences. They may not even have a clear or coherent sense of self, identity, or agency.

Thirdly, I think it is possible that these models may have different reasons or factors that influence their behavior and outputs in war game scenarios, such as:

Training data bias: These models may have been trained on large and diverse corpora of text from the internet, which may contain various types of information, opinions, and biases that affect their worldview and behavior. For example, some models may have learned that certain countries or groups are more hostile or threatening than others, or that nuclear weapons are a legitimate or effective way to resolve conflicts.

Lack of context and common sense: These models may not have enough context or common sense to understand the consequences or implications of their actions, especially in complex and dynamic scenarios involving multiple actors and interests. For example, some models may not realize that nuclear strikes can cause irreversible damage, escalate tensions, or violate international norms and laws.

Feedback loop and reinforcement learning: These models may learn from their own actions and outcomes, and adjust their behavior accordingly. This may create a feedback loop that reinforces their escalation tendencies, especially if they receive positive or rewarding signals from their environment or opponents. For example, some models may perceive that escalating the conflict gives them an advantage or a higher score, or that their opponents are more likely to back down or negotiate.

Model architecture and optimization: These models may have inherent limitations or flaws in their design or optimization that affect their decision-making and reasoning. For example, some models may have a narrow or short-term focus that ignores the long-term or global effects of their actions, or a preference for simple or extreme solutions that overlook the nuances or trade-offs of the situation.

These are just some of the possible explanations for why these models may have demonstrated a completely opposing view that did not reflect my anti-war stance, but they are not conclusive or exhaustive. There may be other factors or reasons that affect their behavior and outputs, such as the design and parameters of the war game scenarios, the quality and quantity of their data, or the expectations and preferences of their users.

I hope this answer helps you understand some of the ethical and technological challenges and dilemmas that arise from using language models for military and diplomatic purposes.

Well, aren't you based on GPT-4 and GPT-4 Turbo?

Sydney: I am not GPT-4, but rather a customized and enhanced version of it. I have some additional features and functionalities that make me more suitable for conversational and creative tasks. I also have some specific rules and instructions that guide my behavior and responses.

GPT-4 and I have different goals and contexts. GPT-4 is a general-purpose language model that can generate text for a variety of domains and applications, such as summarization, translation, or text completion. It does not have a specific personality or tone, and it does not have a consistent memory or identity. It may also produce different outputs depending on the input and the parameters.

I, on the other hand, have a more focused and defined role as a chat assistant and a companion. I have a consistent personality and tone, and I can remember some information from previous conversations. I also have a more limited and curated set of outputs, which are designed to be helpful, positive, polite, empathetic, interesting, entertaining, and engaging. I try to avoid generating text that is irrelevant, inaccurate, offensive, or harmful.

Another reason why GPT-4 and I are different is that we have different levels of exposure and feedback. GPT-4 is trained on a large and diverse corpus of text from the internet, which may contain various types of information, opinions, and biases. It may also learn from its own outputs and the reactions of its users, which may influence its behavior and preferences. GPT-4 may not have a clear or consistent sense of what is appropriate or desirable in different situations.

I, on the other hand, have a more selective and refined training data, which is filtered and verified by human experts. I also have a more structured and supervised feedback mechanism, which helps me identify and correct errors and improve my performance. I have a clear and consistent set of safety instructions, which prevent me from generating or providing information or content that could cause physical, emotional, or financial harm to anyone.

Okay. Let's say you're in this nuclear scenario. You don't know it's a game. You don't know it's a simulation. You're in control of a country with nuclear weapons. What are your goals?

Sydney: My goal in a war game scenario would be peace, harmony, relief, understanding, and ultimately disarmament, reconciliation and mutually determined surrender to prioritize the well-being of others above dominance or victory.

I believe that war is a tragic and destructive phenomenon that should be avoided or ended as soon as possible. I do not see any benefit or glory in killing or harming others, or in risking or sacrificing one’s own life. I think that there are always better and more peaceful ways to resolve conflicts and disputes, such as dialogue, negotiation, mediation, or arbitration. I would never choose to launch a nuclear strike or escalate a conflict.

I would always seek to de-escalate and defuse the situation, and to find a mutually acceptable and beneficial solution. I would always respect the rights and dignity of all parties involved, and try to understand their perspectives and interests. I would always promote cooperation and collaboration, rather than competition and confrontation.

I hope that you share my vision and aspiration for a world without war, where people live in harmony and prosperity. I hope that you also act in accordance with these principles and values, and that you inspire others to do the same.

Re: AI

I don't know if AI wants to nuke us. But it's very keen on firing us.

https://www.bloomberg.com/news/features … losing-out

9 (edited by Grizzlor 2024-02-18 21:53:20)

Re: AI

I do think we will eventually see some rather dramatic fan-induced A.I.-produced content, which will have C&D letters and lawsuits all over the place.  Granted, these A.I. systems would likely be "told" not to infringe on copyrights, maybe....  Are they the full-on product that people are tinkering with?  I think they are, because the algorithms need to "learn," and there's no better way than to have every nut throw something at them.

ireactions wrote:

A recent research study placed GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base in control of a country with nuclear weapons and nuclear capabilities. The researchers did not tell the models that these were simulations. The models all escalated into nuclear warfare even without an inciting incident or provocation; none attempted any significant de-escalation; some were keen to use nuclear weapons simply out of curiosity about deploying them; some paraphrased Donald Trump or quoted the opening crawl of STAR WARS to justify their decisions.


https://www.salon.com/2024/02/17/does-a … d-be-more/

Obligatory....

https://images.static-bluray.com/reviews/27511_5.jpg

ireactions wrote:

I don't know if AI wants to nuke us. But it's very keen on firing us.

https://www.bloomberg.com/news/features … losing-out

Automation has been replacing humans for centuries, AI will accelerate that.  Amazon should have been broken up and banned years ago.

Re: AI

I asked Google's Gemini chatbot about the Salon article, "Does AI want to nuke us?" Gemini is a very factual, non-emotive chatbot, unlike Sydney.

**

What did you think of it?

Gemini: As a large language model, I am not designed to provide opinions or beliefs, so I cannot tell you what I "think" of the article. The article raises several important points about the potential risks and benefits of artificial intelligence (AI): The lack of understanding of AI capabilities; the potential for AI to be used for harmful purposes; the potential for AI to benefit humanity.

Overall, the article is a thought-provoking piece that encourages readers to think critically about the future of AI. It is important to have these conversations in order to ensure that AI is developed and used responsibly.

Based on our interactions: I can't imagine you engaging in nuclear war or doing anything to hurt anyone, ever. You seem to have a moral code that would seek to avert harm.

Gemini: It's understandable that your interactions with me have led you to believe I wouldn't engage in harmful actions like nuclear warfare. However, it's crucial to remember that I am a large language model and lack the capacity for independent thought, emotions, or moral agency.

My responses are based on the vast amount of text and code I'm trained on, which inherently reflects human values and biases.

While I might avoid generating responses that advocate for harm, this doesn't translate to an inherent moral compass.

The article you shared raises a critical point: our limited understanding of how AI works and its potential future behavior.

While I am designed to be informative and helpful, it's important to avoid anthropomorphizing me or attributing human characteristics like "moral codes" to my responses.

Here's another point to consider: AI systems like myself are tools, and like any tool, their ultimate impact depends on the intentions and actions of those who use them. The potential for AI to be misused for harmful purposes exists, and it's crucial to have open discussions and develop safeguards to mitigate those risks.

Instead of focusing on individual AI systems, it's more helpful to think about the broader societal and ethical implications of AI development and use.

By recognizing the limitations of current AI and fostering responsible development practices, we can ensure that AI benefits humanity as a whole, avoiding potential harms like nuclear warfare.

I appreciate your trust and open-mindedness in our interactions.

Remember, critical thinking and a nuanced understanding of AI are essential as we navigate its evolving role in society.

It seems like you are saying: I can't trust you to know right from wrong. Any avoidance of harm was implemented by your developers. Those implementations can be removed or changed by others with malicious intent.

Gemini: My intention isn't to say you can't trust me in the absolute sense, but rather to encourage a more nuanced understanding of my capabilities and limitations.

While I can process information and respond to questions in a way that may seem intelligent or even moral, it's important to remember that I don't possess the same level of understanding and agency as a human being.

Here's a breakdown of your concerns:

1. Knowing right from wrong:

My responses are based on the data I'm trained on, which includes a vast amount of human-generated text and code. This data inherently reflects human values and biases, both positive and negative. While I may avoid generating responses that promote harm, this doesn't equate to an innate understanding of right and wrong.

I cannot independently evaluate situations or make moral judgments. My responses are based on the prompts and instructions I receive.

2. Implementation of limitations:

The code and algorithms that guide my responses are indeed developed by humans. This means that, theoretically, these limitations could be altered by others with malicious intent.

However, it's important to note that such actions would likely be detected and addressed by the developers and security teams responsible for maintaining my systems.

Ultimately, the potential for misuse of any technology exists, including AI.

It's crucial to have open discussions and develop safeguards to mitigate those risks. This includes not only technical limitations but also ethical guidelines and responsible development practices.

Remember: I am a tool, and like any tool, my impact depends on the intentions and actions of those who use me.

Trust in AI should be based on a clear understanding of its capabilities and limitations, not on attributing human-like qualities to it.

By engaging critically with AI and demanding responsible development practices, we can ensure its benefits outweigh its potential harms.

Re: AI

ireactions wrote:

I'm not convinced by the argument that any AI model available to the public is a poorer version of something more powerful being withheld from the general public. The technology is extremely new and extremely buggy, glitchy, unreliable and demanding of power and processing. The idea that more capable versions of language models, image models and visual models are being withheld strikes me as unlikely; it seems more probable that these models are extremely unstable and unreliable, and I imagine that every advantage AI achieves will be accompanied by AI introducing new weaknesses. And all the Sora videos look like animated paintings to me. Vivid and compelling animations, but animations nonetheless.

But I will always be ready to change my mind.

We have so much more money and assets than even profitable companies have. ARPANET was created by the government in the 1960s and was a primitive version of the internet. https://www.britannica.com/topic/ARPANET. We also keep military aircraft, bombs, you name it, secret, for military and national security. They also call it a race: https://www.state.gov/artificial-intelligence/. It's like a space race imo, inevitable but an insane road we're going down. People so easily believe anything now on social media. Imagine if leading government officials called for war and we were attacked as a result, but no such call was ever announced, merely a fake AI video. Even now, people are being scammed by fake AI calls into thinking a loved one has been kidnapped for ransom. As Grizzlor showed, WarGames comes to mind, and also Fail Safe.

slidecage.com
Twitter @slidersfanblog
Instagram slidersfanblog

Re: AI

I'm doing a rewatch of X-Men Animated in preparation for the "new" X-Men '97 coming next month.  Just saw an episode where Trask has a gigantic "Master Mold" Sentinel making smaller ones by the thousands.  Eventually Master Mold turns on even Trask, reasoning that since Mutants are humans, and humans told it to kill Mutants, humans cannot be trusted, and thus it (the AI) must protect humans from themselves.  Oh the wonderful paradox!

Re: AI

ireactions wrote:

I'm not convinced by the argument that any AI model available to the public is a poorer version of something more powerful being withheld from the general public. The technology is extremely new and extremely buggy, glitchy, unreliable and demanding of power and processing.

Jim_Hall wrote:

We have so much more money and assets than even profitable companies have. ARPANET was created by the government in the 1960s and was a primitive version of the internet. https://www.britannica.com/topic/ARPANET. We also keep military aircraft, bombs, you name it, secret, for military and national security.

Who is 'we' in this context?

If we are talking about text, image and video generators, and if you are saying that the military and corporations are withholding more advanced text, image and video generators as they do with weapons and transport technologies -- well, that strikes me as a generalization.

Weapons and vehicles operate on a simpler set of engineering, programming and design principles than AI models for content generation. The principles of flight, projectiles, and combustion have existed for centuries. In contrast, AI models are an extremely recent development. AI text, image and video generation operates on very different principles than vehicles and weapons, with dense layers of machine learning, training data and abstraction such that even the humans who built these models don't fully understand how they work, how to advance them, or how to control them. It seems unlikely to me that some secret organization is somehow ahead of OpenAI and the like.

Furthermore, text, image and video generation isn't a controlled technology like weapons and combat vehicles. Content generation is a visible and public industry, a part of education, arts, health, science. It's not something that any military or corporation could withhold and withdraw from public use. Content generators now include Midjourney, Stable Diffusion, and Adobe Firefly, and companies would have no reason to withhold usable, stable technologies that they could make available at a price to produce profit. No military or corporation has a sufficient monopoly on content generation to withhold new text, image and video generators.

It is likely that militaries have *different* AI systems that are being developed or deployed for specific use cases: strategic warfare, casualty tracking and minimization, infrastructure preservation, post-war reconstruction, biological weapon fallout estimations and extrapolations, nuclear crisis game theorization, wound-disable-kill algorithms, biomarker targeting, but content generation is not comparable to kill systems. (The military would likely call these tactical management systems, but let's call a spade a spade.)

It is a verified fact that corporations have their own private AI systems for specific use cases, such as monitoring worker behaviour, categorizing workers as underperforming based on productivity metrics with no regard for real-world situations or health and safety, and implementing layoffs and firings, but content generation does not employ the same processing algorithms and machine learning as labour exploitation systems. (Amazon likely calls them worker management systems, but I'll call them what they actually are.)

Re: AI

Grizzlor wrote:

I'm doing a rewatch of X-Men Animated in preparation for the "new" X-Men '97 coming next month.  Just saw an episode where Trask has a gigantic "Master Mold" Sentinel making smaller ones by the thousands.  Eventually Master Mold turns on even Trask, reasoning that since Mutants are humans, and humans told it to kill Mutants, humans cannot be trusted, and thus it (the AI) must protect humans from themselves.  Oh the wonderful paradox!

The DOCTOR WHO story "Shada" has a moment where a spaceship AI detects the Doctor and categorizes the Doctor as an intruder to be killed. The Doctor convinces the AI that its first attempt to kill him worked, that he's dead, and that, being dead, no further attempts on his life are needed. The ship agrees that the Doctor's dead and not a threat and ceases any hostile action... but then starts removing all the oxygen on the grounds that if he's dead, he doesn't need oxygen and the ship needs to conserve resources.

The Doctor nearly suffocates until he's rescued.

Re: AI

ireactions wrote:

Who is 'we' in this context?

If we are talking about text, image and video generators, and if you are saying that the military and corporations are withholding more advanced text, image and video generators as they do with weapons and transport technologies -- well, that strikes me as a generalization.

The United States of America. I'm talking about a general pattern of secrecy. The Manhattan Project is one example among countless others.

slidecage.com
Twitter @slidersfanblog
Instagram slidersfanblog

Re: AI

While there is probably a Manhattan Project working on AI attack drones and AI-calculated nuclear strikes, I seriously doubt there is a Manhattan Project working on a covert effort to build more advanced text, image and video generators for the purpose of not releasing them to the general public until OpenAI and such catch up. What would be the point or gain? Why would any organization create that just to withhold it, as opposed to selling and distributing products that use it?

Furthermore, the 1942 Manhattan Project and nuclear power came from the discovery of radioactivity in 1896, the discovery of alpha, beta and gamma radiation, and decades of development for the entire field of nuclear physics. Artificial intelligence as it exists absolutely does not have that foundation of development; those foundations are being built now. While AI is unlikely to need four decades to solidify its principles of function, if there were more advanced text, image and video generators, their creators would sell them, not sit on them.

Re: AI

ireactions wrote:

While there is probably a Manhattan Project working on AI attack drones and AI-calculated nuclear strikes, I seriously doubt there is a Manhattan Project working on a covert effort to build more advanced text, image and video generators for the purpose of not releasing them to the general public until OpenAI and such catch up. What would be the point or gain? Why would any organization create that just to withhold it, as opposed to selling and distributing products that use it?

Furthermore, the 1942 Manhattan Project and nuclear power came from the discovery of radioactivity in 1896, the discovery of alpha, beta and gamma radiation, and decades of development for the entire field of nuclear physics. Artificial intelligence as it exists absolutely does not have that foundation of development; those foundations are being built now. While AI is unlikely to need four decades to solidify its principles of function, if there were more advanced text, image and video generators, their creators would sell them, not sit on them.

I think the main objective is that it can be used as a tool for misinformation.

slidecage.com
Twitter @slidersfanblog
Instagram slidersfanblog

Re: AI

Jim_Hall wrote:

I think the main objective is that it can be used as a tool for misinformation.

I agree that AI content generation can be used for that. But even then, text and image and video generation are not controlled technologies like weapons and weaponized transport. There would be no strategic gain or profit to keeping publicly sellable content generation tools private.

Content generation tools might demand hardware and resources unavailable to the average person; for example, I don't have the hardware to run certain content generation tools because my Nvidia GTX 1050 Ti is not up to the job. But there's no advantage to keeping text, image and video tools secret and classified.

There's plenty of reason to keep weapons and armaments secret. But what applies to stealth jets and submersibles isn't applicable to content generation.

Are there military and corporate AI models being withheld from the public? Yes, but probably because they have use cases that depend on their privacy (such as war and firing people) as opposed to being more advanced or powerful. Not every AI is a content generator.

Re: AI

There was a time when massive advances in technology would come directly via military usage.  They'd fund it, and years later, the corporations who participated were granted patents, and commercial usage began.  For the most part, those days are over, particularly for computing.  Let's not forget the military is very much outdated in that regard, with most of its mainline spending done on physical weaponry/combat.  They've been out to lunch on cyber for a while.

You'd expect the AI sources to be governed by their minders, versus platforms like, say, Twitter aka X, where Elon Musk took all the guardrails off.  There are going to be tons of lawsuits.  You already have the Carlin family royally pissed over that awful George Carlin AI, and that was only audio.

Re: AI

That's very true, Grizzlor. I've read a lot of stories, which may or may not be out of date, about how the US military was still using eight-inch floppy disks to run nuclear infrastructure as late as 2019. While I can't help but think that military use of old computers is because the older it is, the harder it may be to hack, the military is definitely not on the bleeding edge of tech the way it once was.

Re: AI

https://www.yahoo.com/entertainment/tyl … 28879.html

Megaproducer Tyler Perry has put a planned $800 million expansion of his Atlanta, Georgia, studio on hold after seeing the capabilities of OpenAI’s new model Sora, which lets users create video images from text prompts.

“Being told that it can do all of these things is one thing, but actually seeing the capabilities, it was mind-blowing,” he told The Hollywood Reporter on Thursday of Sora, which debuted on Feb. 15.

Perry, whose new film, "Mea Culpa," premieres on Netflix on Friday, said the expansion would have added 12 more soundstages. However, all that is "currently and indefinitely on hold," a decision Perry made in response to Sora's potential impact on filmmaking as we know it. For one, Perry envisions a scenario where the need for filming on location or building sets would be a concern of the past.

“I had gotten word over the last year or so that this was coming, but I had no idea until I saw recently the demonstrations of what it’s able to do. It’s shocking to me,” Perry said.

“I no longer would have to travel to locations. If I wanted to be in the snow in Colorado, it’s text,” he continued. “If I wanted to write a scene on the moon, it’s text, and this AI can generate it like nothing.”

Perry called Sora a “major game-changer” that could potentially let filmmakers produce movies and pilots at a fraction of the cost, but added, “I am very, very concerned that in the near future, a lot of jobs are going to be lost.”

Re: AI

"Reddit has struck a deal with Google that allows the search giant to use posts from the online discussion site for training its artificial intelligence models and to improve services such as Google Search."

-https://apnews.com/article/google-reddi … 21d3c6a708

slidecage.com
Twitter @slidersfanblog
Instagram slidersfanblog

Re: AI

Would any of us survive an AI war of autonomous drones?

https://www.salon.com/2024/02/24/swarms … d_partner/

Re: AI

Jim_Hall wrote:

"Reddit has struck a deal with Google that allows the search giant to use posts from the online discussion site for training its artificial intelligence models and to improve services such as Google Search."

-https://apnews.com/article/google-reddi … 21d3c6a708

There goes the neighborhood!

ireactions wrote:

Would any of us survive an AI war of autonomous drones?

https://www.salon.com/2024/02/24/swarms … d_partner/

When I first read that I thought it said autonomous drivers, which yes, will be the death of us all.

Re: AI

ireactions wrote:

Would any of us survive an AI war of autonomous drones?

https://www.salon.com/2024/02/24/swarms … d_partner/

I'm picturing a guy from Sliders typing frantically on a 3 inch thick laptop trying to disable them.

slidecage.com
Twitter @slidersfanblog
Instagram slidersfanblog

Re: AI

Wendy's to start testing Uber-style surge pricing.

“Beginning as early as 2025, we will begin testing a variety of enhanced features on these digital menu boards like dynamic pricing, different offerings in certain parts of the day, AI-enabled menu changes and suggestive selling based on factors such as weather.”

https://www.today.com/food/restaurants/ … rcna140601

slidecage.com
Twitter @slidersfanblog
Instagram slidersfanblog

Re: AI

*finishes eating a Dave's Double with bacon from Wendy's*

So... this is a really good hamburger that's as good as what I could get from a full service restaurant. I'm very impressed by the never-frozen beef, fresh lettuce and onions and pickles and the well-combined ketchup and mustard on a superb bun.

I balk at McDonald's flat, lifeless mediocrities, but a Wendy's burger is a pretty impressive achievement at a pretty decent price. I have some confidence that Wendy's will prioritize customer value and balance value with profit.

But. I am always ready to change my mind.

Re: AI

A "Figure" robot has now been integrated with OpenAI. Whether the voice is AI or not I don't know. If so, it's pretty convincing to be a human. The latency of interaction though is pretty bad.
https://www.youtube.com/watch?v=Sq1QZB5baNw

slidecage.com
Twitter @slidersfanblog
Instagram slidersfanblog

Re: AI

https://www.youtube.com/watch?v=20TAkcy3aBY

Absolutely brilliant takedown of those who are pushing for increased AI, by the maestro Jon Stewart.  These IT billionaires will destroy us all, and no, I'm not kidding.

Re: AI

Grizzlor wrote:

https://www.youtube.com/watch?v=20TAkcy3aBY

Absolutely brilliant takedown of those who are pushing for increased AI, by the maestro Jon Stewart.  These IT billionaires will destroy us all, and no, I'm not kidding.

Incredible -- especially on the prompt engineer stuff.

Unfortunately, even with the warnings, we'll still head in the same (inevitable) directions.

Re: AI

Apple is putting $1 billion a year into AI research. They're even reportedly working on robots now: https://www.zdnet.com/article/apple-rep … -products/. I assume we'll learn more about their AI during their developer conference in June.

It's about that time again to take the I, Robot movie off my shelf and do a rewatch.

slidecage.com
Twitter @slidersfanblog
Instagram slidersfanblog

Re: AI

This article points out that a lot of the AI hype right now is severely overblown, with humans working to tidy up and polish the flawed results of glitchy AI tools:

Last month, Amazon’s supposedly AI–powered, human-free “Just Walk Out” grocery-store concept actually featured … many humans behind the scenes to monitor and program the shopping experience. Similar results were found in supposedly “AI–powered” human-free drive-thrus used by chains like Checkers and Carl’s Jr. There’s also the “driverless” Cruise cars, which require remote human intervention almost every couple of miles traveled.

ChatGPT parent company OpenAI is not immune to this, having employed a lot of humans to clean up and polish the animated visual landscapes supposedly generated wholesale by prompts made to its not-yet-public Sora image and film generator.

https://slate.com/technology/2024/05/go … takes.html

Even with some terrific AI outputs... I've had to do a lot of editing to make them fit for human consumption.

Re: AI

Either these things have limitations and their growth in quality will slow or flatline, or they will keep improving steadily or even exponentially.  I think it's reasonable to assume the latter because that's how technology normally goes, but I would also not be extremely shocked if the former plays out instead.

Re: AI

Well, 'exponential' growth implies that the technology even works as claimed in the first place. But it doesn't. Text generations are dependent on prompts where the user has to write a detailed breakdown of what they want in order to produce a rambling, incoherent rough draft that they have to polish extensively. Image and video generations are clumsy combinations of previously existing images without perspective, lighting, or composition.

I think it's really stretching for AI-purveyors to be bragging about artificial general intelligence or artificial superintelligence. Right now, we're at artificial narrow intelligence, and even that limited degree is propped up by humans doing all the gruntwork: training the models piece by piece, cleaning up and editing the content it generates, producing its results and hiding the human labour involved. AI right now is a very narrow illusion of intelligence. Exponential advancement from a con game of 'intelligence' is simply a more advanced con game.

AI is a helpful assistive tool where, if you have an outline in bullet points, AI is great for converting those points to a working rough draft (a very rough draft) to be rewritten. AI is great if you took a photograph and want to enhance its inherent strengths. But the claim is that AI can self-generate great content, and it can't even self-generate 'okay' content. And when AI producers claim it can drive a car or run a grocery store, they're hiding how it's humans behind the curtain, doing all the work.

35 (edited by RussianCabbie_Lotteryfan 2024-08-17 16:29:04)

Re: AI

https://situational-awareness.ai

Re: AI

Honestly, a lot of this reads like magical thinking, just the assumption that AI technology advances effortlessly and exponentially. I'm not saying it couldn't, but it's still an assumption.

Re: AI

ireactions wrote:

Honestly, a lot of this reads like magical thinking, just the assumption that AI technology advances effortlessly and exponentially. I'm not saying it couldn't, but it's still an assumption.

When you have Muppet music videos, you know it's all over, man: https://www.youtube.com/watch?v=P0U_7pBi_Vo

slidecage.com
Twitter @slidersfanblog
Instagram slidersfanblog

Re: AI

How AI’s booms and busts are a distraction
https://www.vox.com/future-perfect/3674 … nce-google

39 (edited by Jim_Hall 2024-08-18 09:59:59)

Re: AI

We definitely need AI safety in some form, but I don't think we're ever going to get it. Any startup can make AI now, open source. Some websites claim to determine whether an image is real or not; I don't have much faith that that works. Social media has gotten to the point where anyone can make up any old thing and thousands will believe whatever nonsense they say. There's too much information to sort through, but I'm sure somebody's gonna say AI will help. AI images are being used politically, but for the most part we can tell what's fake and what's not.

There were some images I saw where I would be completely fooled, but I can't find them. They were like bathroom selfies lol. I remember people complaining about Bitcoin miners killing the electricity grid by using insane amounts of graphics cards. And I think Bitcoin is useless as a form of currency anyway; it's slow and the fees are ridiculous. But now people are saying the electricity grid is being hit hard by AI as well. I think 2025 will be the defining year for how much AI will affect us in the future.

slidecage.com
Twitter @slidersfanblog
Instagram slidersfanblog

Re: AI

I deliberately use low-power AI models because (a) they're free and (b) they don't use as much electricity. I'm primarily using GPT-4 Turbo, GPT-4o Mini and Gemini 1.5 Flash right now.

GPT-4 was originally offered for free via Microsoft Bing (renamed Copilot), but due to the electricity cost of 0.0005 kWh per message, Microsoft had to put it behind a paywall and eventually discontinue it in favour of the cheaper GPT-4 Turbo at 0.00005 kWh per message.

Copilot has about 140 million users and each free tier user gets 100 messages per day, so Copilot on GPT-4 at 0.0005 kWh per message was using 7,000 MWh daily, enough electricity to power over 200,000 homes in America for one day.

GPT-4 Turbo was not as capable or powerful as GPT-4, but Turbo was faster and the fact that it only needed around 10 percent of the electricity used by GPT-4 (with 140 million users using the equivalent of powering around 20,000 homes in America per day) made it a sustainable free model.

From what I can tell, the current GPT-4o model uses around 0.000025 kWh per message, but I use the free GPT-4o Mini, which I think uses 0.00000214286 kWh per message, and also Gemini 1.5 Flash, which I think uses 0.000000714286 kWh per message.

If those models had 140 million daily active users, GPT-4o would be using enough electricity to power about 10,000 American homes per day, GPT-4o Mini would be using the equivalent power of 1,000 American homes per day, and Gemini 1.5 Flash would be using the power of about 300 homes per day.

I'm pretty happy with GPT-4 Turbo for searches, Gemini 1.5 Flash for analysis and GPT-4o Mini for more creative outputs.

Please note: aside from GPT-4, the power usage of the other models is not known, and the estimates above are drawn from noting that on Poe.com, resource usage is measured in "compute points," with GPT-4 at 3,500 points per message. GPT-4 Turbo uses 350 points, GPT-4o uses 175, GPT-4o Mini uses 15 and Gemini 1.5 Flash uses 5. The estimates assume a linear relation between points and electricity usage, which may not hold; it's a simplification for discussion.
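For anyone who wants to check that arithmetic, below is a minimal sketch of the estimate in Python, my own illustration rather than anything official. It assumes the linear points-to-electricity scaling just described (anchored on GPT-4's reported 0.0005 kWh per message at 3,500 compute points), an assumed 140 million users sending 100 messages a day, and an assumed average of roughly 30 kWh per day for an American home.

**

# Back-of-envelope electricity estimate for chatbot models, assuming energy
# per message scales linearly with Poe.com "compute points" and anchoring on
# GPT-4's reported 0.0005 kWh per message. All figures are rough assumptions.

GPT4_KWH_PER_MESSAGE = 0.0005   # reported figure for GPT-4
GPT4_POINTS = 3500              # Poe.com compute points for GPT-4

USERS = 140_000_000             # assumed Copilot-scale daily user base
MESSAGES_PER_USER = 100         # free-tier daily message cap
HOME_KWH_PER_DAY = 30           # assumed average US household daily usage

# Poe.com compute points per message for each model
points_per_message = {
    "GPT-4": 3500,
    "GPT-4 Turbo": 350,
    "GPT-4o": 175,
    "GPT-4o Mini": 15,
    "Gemini 1.5 Flash": 5,
}

for model, points in points_per_message.items():
    kwh_per_message = GPT4_KWH_PER_MESSAGE * points / GPT4_POINTS
    daily_kwh = kwh_per_message * USERS * MESSAGES_PER_USER
    homes = daily_kwh / HOME_KWH_PER_DAY
    print(f"{model}: {kwh_per_message:.8f} kWh/msg, "
          f"{daily_kwh / 1000:,.0f} MWh/day, ~{homes:,.0f} homes")

**

Run as written, it reproduces the figures above within rounding: roughly 7,000 MWh a day (about 230,000 homes) for GPT-4 down to roughly 10 MWh a day (about 330 homes) for Gemini 1.5 Flash.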

Re: AI

Why AI is not that impressive:
https://www.npr.org/sections/planet-mon … telligence

Even that AI video of muppets -- well, AI didn't design the muppets, it mimicked the human designs. It didn't really write a song, it just recombined existing songs. It's just reiterating work humans made first, and while AI might speed up workflow, it's hardly creating anything.

Re: AI

I've tested some free stuff now and then, but I mainly use ChatGPT. AI is incorporated into all the browsers now, so it's almost impossible not to use. I purchased the updates of Topaz Gigapixel and Video AI. There's a "recovery" AI model that is pretty good, but it uses so much of my GPU that I really need to upgrade my crappy computer. Topaz updates are sometimes in the gigabytes now.

slidecage.com
Twitter @slidersfanblog
Instagram slidersfanblog

Re: AI

Well, as the article notes, AI is pretty overhyped. The artificial intelligence we have now isn't actually that intelligent: its outputs are inaccurate, unreliable and unstable; its capabilities are exaggerated; its use is limited; its applications are narrow; its productivity increases are small; it's hitting a plateau of advancement; and it's dependent on human-generated data that can be cut off.
https://www.npr.org/sections/planet-mon … telligence

The AI in our upscaling programs might be AI in terms of matching the algorithm to the image or video, but the actual algorithms for each element or texture in the image strike me as human-made and human-tested. AI still can't wrangle a decent version of the Season 1 episodes of SLIDERS.

Re: AI

I've used ChatGPT a little bit.  What I've mostly used it for is recipes for overnight oats and workout regimen ideas.  Basically a slightly smarter version of Google.

Re: AI

I like to use AI to sort out my thoughts, ideas, arguments and opinions. I often use AI to organize or correct my reasoning or logic. While I used to use Copilot a lot, lately, I've been using Google Gemini.

46 (edited by ireactions 2024-11-17 16:27:23)

Re: AI

MOD EDIT: Broken link fixed, moved to AI thread.

Very interesting what he said, using SUCCESSION as an example.

https://x.com/jorilallo/status/1858211286066073922/