As if you don’t have enough real people to follow, Instagram will reportedly soon let you create your own “AI friend.” Leaked screenshots show a fully customizable AI chatbot: you can pick its age, gender, ethnicity, and personality traits. Want an “enthusiastic” or “pragmatic” friend? You can create one.
The revelation comes from app researcher and reverse engineer Alessandro Paluzzi, who shared screenshots of the chatbot-building features on X (formerly Twitter). You can choose personality traits like “reserved,” “creative,” and “empowering,” as well as set the AI’s interests. Whether you’re into “DIY,” “animals,” or “entertainment,” those interests shape your AI friend’s personality and conversation style.
After you create the personality, you can give your AI companion a unique avatar and name. From there, engaging with your buddy is simple: open a chat window and start the conversation with a click.
When asked for comment, Instagram remained tight-lipped, neither confirming nor denying the new feature. It’s worth noting that features in development or testing may never see the light of day. Still, I believe this one will ship, and here’s why.
First, Instagram already began testing its own chatbot earlier this year. And last month, Instagram’s parent company, Meta, introduced 28 AI chatbots across Instagram, Messenger, and WhatsApp, some bearing the names of celebrities like Snoop Dogg and Tom Brady. Meta is already deep in the AI game and, as always, is quick to adopt technology as it gains popularity.
Potential issues with AI chatbots
While the idea of an AI friend sounds intriguing, it comes with its own set of concerns. Speaking with TechCrunch, Julia Stoyanovich, director of NYU’s Center for Responsible AI, cautioned against the potential dangers of such technologies. “One of the biggest — if not the biggest — problems with the way we are using generative AI today is that we are fooled into thinking that we are interacting with another human,” Stoyanovich said.
“We are fooled into thinking that the thing on the other end of the line is connecting with us. That it has empathy. We open up to it and leave ourselves vulnerable to being manipulated or disappointed. This is one of the distinct dangers of the anthropomorphization of AI, as we call it.”
Stoyanovich added that “whenever people interact with AI, they have to know that it’s an AI they are interacting with, not another human.” According to her, this is “the most basic kind of transparency that we should demand.”