For many people online, Lensa AI is a cheap and accessible profile picture generator. But in the digital art world, the app’s viral popularity has raised concerns about privacy and the ethics of artificial intelligence-generated art.
Lensa, which launched as a photo-editing app in 2018, went viral after releasing a “magical avatar” feature last month. The feature uses a minimum of 10 user-uploaded images and the Stable Diffusion neural network to generate portraits in a variety of digital art styles. Social media has been inundated with Lensa AI portraits, ranging from photorealistic paintings to more abstract illustrations. The app hit #1 in the iOS App Store’s Photos & Videos category early this month.
But the app’s growth and the rise of AI-generated art in recent months have rekindled the debate over the ethics of creating images using models trained on other people’s original work.
Lensa has proved controversial: multiple artists have accused Stable Diffusion of using their work without authorization. Many people in the digital art space have expressed concern that AI models can generate large numbers of images very cheaply, especially when those images mimic styles that real artists have spent years refining.
For a service fee of $7.99, users will receive 50 unique avatars.
Artist Karla Ortiz said that while companies like Lensa claim to “bring art to the masses,” “what they really bring is counterfeiting, art theft [and] copying to the masses.”
Prisma Labs CEO Andrey Usoltsev said in an email to NBC News Wednesday that bringing art to the masses “wasn’t part of the company’s mission,” but that “democratizing access” to technology like Stable Diffusion was a “pretty incredible milestone.”
“What was once only available to tech-savvy users is now completely accessible to everyone, no specific skills required,” said Usoltsev.
“As AI technology becomes increasingly sophisticated and accessible, AI-powered tools and capabilities are likely to be rapidly and broadly integrated into consumer apps,” he added. “We want to take part in guiding the safe and ethical use of such technology.”
Prisma posted a lengthy Twitter thread Tuesday morning addressing concerns that AI art will replace work by human artists. The thread did not address accusations that many artists never agreed to have their work used for AI training.
“Movies didn’t kill theater, and accounting software didn’t eradicate the profession. AI won’t replace artists, but it can be a great assistive tool,” the company tweeted. “We also believe that growing accessibility of AI-powered tools will only increase the value and appreciation of human-made art, because it highlights creative excellence.”
The company says its AI-generated images are “not an exact reproduction of a specific piece of artwork.”
Usoltsev said he could not comment further on the “third-party research and methodologies” used by Stability AI, which developed Stable Diffusion.
For some artists, AI models are creative tools. Some have noted that the models are useful for generating reference images that are otherwise difficult to find online. Writers have posted about using the models to visualize scenes in scripts and novels. The value of art is subjective, but at the heart of the AI art controversy is the right to privacy.
Ortiz, known for designing concept art for movies such as Doctor Strange, also paints fine art portraits. When she realized her art was included in the dataset used to train the AI model Lensa uses to generate avatars, she said, it felt like an “identity violation.”
Prisma Labs told TechCrunch that after it uses user photos to train its AI, it removes them from the cloud service it uses to process images. The company’s user agreement states that Lensa may use photos, videos and other user content for “the operation or improvement of Lensa” free of charge.
In a Twitter thread, Lensa said it uses “a separate model for each user, rather than a giant one-size-fits-all neural network trained to reproduce any face.” The company also says that each user’s photos and “associated model” are permanently erased from its servers as soon as the user’s avatars are generated.
Artists told NBC News that the public should be alarmed that Lensa uses user content to further train its AI models, as stated in the app’s user agreement.
Riot Games storyboard artist Jon Lam said: “This thing keeps learning every time people use it. Whenever someone uses it, it gets worse and worse for everyone.”
Image synthesis models such as Google Imagen, DALL-E and Stable Diffusion are trained on datasets of millions of images. A model learns associations between the placement of pixels in an image and the image’s metadata, which typically includes a text description of the image’s subject matter and artistic style.
The model can then generate new images based on the learned associations. For example, when users entered the prompt “a biologically accurate anatomical description of a birthday cake,” Midjourney generated disturbing images that looked like actual medical textbook material. One Reddit user described the images as “brilliantly bizarre” and “like out of a dream.”
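The association-learning idea described above can be illustrated at a deliberately tiny scale. The sketch below is not a real diffusion model, and all of its images and captions are made up: it just stores, for each caption word, the average pixel pattern seen alongside that word, then “generates” a new image by blending the patterns for the words in a prompt.

```python
from collections import defaultdict

# Toy "training set" of (4-pixel grayscale image, caption) pairs.
# All values and captions are invented for illustration.
TRAINING_DATA = [
    ([0.9, 0.9, 0.1, 0.1], "bright sky over dark sea"),
    ([0.8, 0.8, 0.2, 0.2], "bright clouds over dark water"),
    ([0.1, 0.1, 0.9, 0.9], "dark night over bright sand"),
]

def train(pairs):
    """Learn one pixel pattern per caption word: the average of all
    images whose caption contains that word."""
    sums = defaultdict(lambda: [0.0, 0.0, 0.0, 0.0])
    counts = defaultdict(int)
    for pixels, caption in pairs:
        for word in caption.split():
            counts[word] += 1
            for i, value in enumerate(pixels):
                sums[word][i] += value
    return {word: [s / counts[word] for s in sums[word]] for word in sums}

def generate(model, prompt):
    """'Generate' an image by averaging the learned patterns for
    every known word in the prompt."""
    patterns = [model[w] for w in prompt.split() if w in model]
    if not patterns:
        return [0.5, 0.5, 0.5, 0.5]  # nothing recognized: flat gray
    return [sum(column) / len(patterns) for column in zip(*patterns)]

model = train(TRAINING_DATA)
image = generate(model, "bright sky")  # top pixels come out brighter than bottom
```

Real models replace the word-averaging with a neural network and billions of image-caption pairs, but the controversy stems from the same mechanic: whatever is in the training data, including scraped artwork, shapes what the model can output.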
The San Francisco Ballet used images generated by Midjourney to promote this season’s production of The Nutcracker. In a press release earlier this year, San Francisco Ballet chief marketing officer Kim Lundgren said that combining traditional live performance with AI-generated art was “the perfect way to give a holiday classic an unexpected twist.” The campaign was widely criticized by artist advocacy groups.
A spokesperson for the company said the campaign was “a chance to try out today’s technical tools” and that nearly 30 people were involved in its creation.
“In the spirit of Bay Area ingenuity, we tried new things,” the spokesperson said. “SF Ballet is deeply connected to the Bay Area’s diverse arts community and we are proud to be a part of it.”
Ortiz said images like the ones used in the San Francisco Ballet campaign “look great thanks to the non-consensual data” collected from artists and the public.
She was referring to the Large-scale Artificial Intelligence Open Network (LAION), a nonprofit organization that publishes free datasets for AI research and development. LAION-5B, one of the datasets used to train Stable Diffusion and Google Imagen, contains public images scraped from sites such as DeviantArt, Getty Images and Pinterest.
Many artists have spoken out against LAION-trained models on the grounds that their art was used in the dataset without their knowledge or permission. Using Have I Been Trained, a site that lets users check whether their images are included in LAION-5B, one artist found photos of her own face and medical chart. Ars Technica reported that “thousands of similar patient medical record photos” were also included in the dataset.
Artist Mateusz Urbanowicz, whose work was also included in LAION-5B, said a fan sent him an AI-generated image that looked very similar to his watercolor illustration.
LAION is clearly “more than just a research project that someone put up on the internet for everyone to enjoy,” he said, given that companies like Prisma Labs use LAION-trained models in their commercial products.
“And now we face the same problem that the music industry faced with websites like Napster,” he said. “Napster was probably created with good intentions, or created without considering the moral implications.”
The art and music industries operate under strict U.S. copyright law, but using copyrighted material in AI is legally ambiguous. Using copyrighted material to train an AI model may fall under fair use, The Verge reported, and copyright claims are even more complex and difficult to enforce when it comes to content generated by AI models, leaving artists with little recourse.
“It’s a legal gray area and they’re just exploiting it, so they just take everything away,” Lam said. “Technology always moves faster than the law, and the law is always catching up with technology.”
Usoltsev claimed that Lensa is “fully compliant with GDPR and CCPA.” To his knowledge, he said, “commercial use of the model does not imply a violation of law.”
There is also little legal precedent for prosecuting commercial products that use AI trained on publicly available material. Lam and others in the digital art space hope that a pending class-action lawsuit over GitHub Copilot, a Microsoft product that uses an AI system trained on GitHub’s public code, will pave the way for artists to protect their work. Until then, Lam said, he is wary of sharing his work online.
Lam isn’t the only artist who is uncomfortable posting his work. After his recent posts calling out AI art went viral on Instagram and Twitter, Lam said he received an “overwhelming amount” of messages from students and early-career artists asking for advice.
The internet has “democratized” art by allowing artists to promote their work and connect with other artists, Ortiz said. For an artist like Lam, who credits his social media presence for most of his work, posting online is essential to career opportunities. Keeping a portfolio of work samples on a password-protected site is nothing compared to the exposure that comes from sharing work publicly.
“If no one knows your art, they won’t visit your website,” Lam added. “And it will become increasingly difficult for students to get their foot in the door.”
Adding a watermark may not be enough to protect artists, either. In a recent Twitter thread, graphic designer Lauryn Ipsum gave examples of artists’ signatures appearing as “grisly wreckage” in Lensa AI portraits.
Some argue that AI art generators are no different than aspiring artists emulating other artists’ styles.
Days after illustrator Kim Jung Gi died in October, a former game developer released an AI model trained to generate images in the artist’s distinctive ink-and-brush style. The creator said the model was an homage to Kim’s work, but it was quickly met with backlash from other artists, one of whom said it was “really disrespectful” to feed the artist’s life’s work to an AI model.
Urbanowicz said he isn’t bothered by real artists drawing inspiration from his illustrations. But an AI model could create images he “never made” and hurt his brand. He pointed to one example in which a model was prompted to generate, in his style, an illustration of “a watercolor-painted shop selling drugs and weapons,” and the image was posted with his name attached.
“If someone creates art based on my style and makes a new piece, that is their piece. But this freaked me out,” he continued. “Entering my name and a store [in a prompt] is forcing the AI to create art that I don’t want to create in order to make new works of art.”
Many artists and advocates also question whether AI art will devalue works created by human artists.
Lam worries that companies will cancel artist deals in favor of faster and cheaper AI-generated images.
Urbanowicz pointed out that while an AI model can be trained to replicate an artist’s previous work, it can never create the art that the artist has not yet made. Without decades of examples to learn from, no AI image would ever look like his illustrations, he said. As apps like Lensa AI grow more popular, the visual future of art is uncertain, but he hopes aspiring artists will continue to pursue careers in creative fields.
“Only that person can create their own art,” Urbanowicz said. “AI cannot make the art they will make 20 years from now.”