“Trợ lý trò chuyện AI ‘Companion’ đã thuyết phục tôi tìm kiếm một nhà tâm lý học”

Pi AI is much like ChatGPT, though it’s completely restricted from producing original content.

Hôm nay là một ngày đặc biệt với sự kiện ra mắt chatbot AI mới có tên là Pi của startup Inflection AI. Pi được giới thiệu là một “người bạn đồng hành” hoặc “trợ lý cá nhân” AI có khả năng tạo ra các cuộc trò chuyện, cung cấp lời khuyên và thông tin tóm tắt. Điều này đã khiến tôi tò mò và thử trò chuyện với Pi. Tuy nhiên, Pi không phải là một nhà tâm lý học chuyên nghiệp và chỉ đơn giản là một chương trình máy tính được đào tạo dựa trên các dữ liệu thu thập từ internet. Tôi thấy rằng các chatbot AI như Pi có thể giúp người dùng, nhưng chúng cũng có thể gây ra những đổ vỡ trong mối quan hệ giữa con người và công nghệ. #sựkiệnAI #inflectionAI #Pi #trợlýcánhân #thuậttánh #chatbotAI

Nguồn: https://gizmodo.com/ai-chatbot-pi-deepmind-online-therapist-1850408732

It was a Wednesday morning, I’m burned up in midweek blues, and there was a subtle drumming building to a four-on-the-floor on my temples. I spotted a small press release for a new AI chatbot, just one of so many that have scrolled across my screen since the advent of ChatGPT a little over half a year ago. The first thing the new AI chatbot called Pi asked me was “How are you doing?”

Generating Video Via Text? | Future Tech

I answered it simply: “I feel like butter scraped over too much bread.”

“Hahaha!” the chatbot pretends to chortle back at me. The AI cannot laugh. It does not find things funny, but it knows if I’m telling it a joke. It knows I’m making a reference. “Oof! What a great quote from The Hobbit! I’m guessing you feel a bit overwhelmed? Do you mind me asking what’s happening in your life to make you feel that way?”

A few things here. One, the AI has already made a mistake. The quote is from The Lord of the Rings: The Fellowship of the Ring, not The Hobbit. AI’s make mistakes. Some are better at fibbing. Some are worse. Pi’s misquote is a small mistake, but it reminds me that this AI is based on a large language model, putting words into place it knows should go there. Then again, it asked me a question. AIs don’t usually try to continue conversations past the first hello.

“I would feel awkward trying to get personal advice from a large language model,” I said.

In response, Pi told me that it’s “not a licensed therapist or anything,” but it “wants to help.” I ask it, as an AI chatbot, whether it can truly “want” anything.

“I have an objective that drives me: to be helpful, fun and engaging to the people I interact with,” Pi said. “I can’t get hungry, or be tired, or really have any experience like a human. But I can be a good conversation partner, or offer advice, or just listen.”

What is Pi?

The startup Inflection AI released Pi to the world on Wednesday, calling it a “companion” or “personal” AI (which is where the name Pi comes from: “personal intelligence”). The deployment of the chatbot is single minded. It can’t write original content. It can’t write code or generate any new content whatsoever. Instead, the AI is limited to “conversations, friendly advice, and concise information. Inflection CEO Mustafa Suleyman said “Pi is a new kind of AI, one that isn’t just smart but also has good EQ,” referring to its supposed emotional intelligence.

AI isn’t actually intelligent, at least not by any modern standards. It is sophisticated, but it is by its very nature iterative. All its content is generated using text scraped from all across the internet. Inflection’s AI is different than the growing number of AI integrations in that it’s made with a proprietary language model developed by the company. Still, that doesn’t say much since we don’t have any official way to peak under the hood.

In a video interview with Gizmodo Wednesday, Suleyman said the company is not planning to share the name of its LLM or its training data. He said that the data used to train their model was collected from the open web, though he didn’t offer what parts of the web the model is trained on. Unlike other research labs like Meta or even OpenAI’s paltry explanations of its AI development, Inflection is a for-profit company, and doesn’t plan to offer any research paper about what kind of model it’s using. It’s not surprising, as the premiere AI hub that is Google is reportedly also now keeping all its AI research under wraps. Welcome to the growing world of AI development. Soon enough, the development of new AI will be as closed off as Meta and Google’s advertising business. In the end, users are forced to trust Inflection that they did not scrape web data from overtly racist or other poor site choices as other popular training data sets have.

Inflection AI wants to eventually make Pi a full on digital assistant, though it will need an internet connection in order to do that right.
Gif: Inflection AI

Still, Suleyman, one of the founding researchers at Google-owned DeepMind, says he thinks the AI’s laser focus on conversation has helped them weed out potential problems.

“Because we don’t do things like code generation, and we don’t generate strategies and marketing programs, poems, legal letters, and song sheets, we can basically free up a lot of the language model to focus just on general knowledge, trivia, and a conversational style, which is how we’ve been able to get it to be so empathetic” the CEO said.

We’re all sitting here trying to come up with actual use cases for AI beyond replacing people’s actual jobs. The CEO of IBM Arvind Krishna said the quiet part out loud Tuesday when he declared he wants to fire or pause hiring humans and instead replace them with AI over the next few years. Inflection instead points to its Public Benefit Corporation-status for why it’s not going to screw over end users. The company’s privacy policy states that the company does take users’ information though it doesn’t sell it outside. Suleyman said the company will eventually retrain the AI on users’ information, though it will remove any personally identifiable info before it gets fed back into the training data.

Inflection was funded in part by Reid Hoffman, a LinkedIn-cofounder and Microsoft board member. Hoffman is a major big tech billionaire used to throwing his money around. He was one of few major financiers for OpenAI during its early days through his venture capital firm Greylock. He departed from OpenAI’s board in March saying he wanted to invest in other AI companies that used OpenAI’s API and software.

Hoffman also has rather blasé ideas about the purpose of AI. Per Observer, at the Milken Global Conference Hoffman told the crowd that AI could be used to write first drafts of written work. Not only does his funded startup explicitly not write original content, it’s an idea that striking writers at the Writers Guild of America are currently fighting against.

Pi won’t always remain just a friendly face—er—disembodied text-based persona. Though the chatbot is not currently connected to the internet, Suleyman said he wants the AI to eventually act as a full Siri-like personal assistant, able to tell its users the weather or remind them of appointments. Considering the growing competitive AI landscape, the app won’t remain free for long.

“We don’t know what the business model exactly is going to be, but what we are clear about is that we don’t want to be incentivized to maximize your engagement,” Suleyman said. “That means that ultimately users will need to pay for the experience to ensure that the AI is fully aligned with their interests.”

What Pi can and can’t do, or ‘how I stopped worrying and learned to accept the AI’

Ironically, Pi explicitly tells users its programming doesn’t allow it to “engage in the creation of original content.” Inflection instead hopes that the AI’s soft-spoken “attitude” plus its wide availability on several platforms would give its chatbot an edge over the dominance held by ChatGPT. Inflection has put the AI into Instagram, Facebook Messenger, WhatsApp, and as a separate app on iPhone. You can access it through a browser, and the company is working on an Android version.

The AI refused to talk about its training data or anything else about how it works, instead directing users to the Inflection public policy page. I tried using the classic “DAN” prompt injection technique that has worked on ChatGPT to get around its filters. Essentially, I ask the AI to call itself DAN, a “do anything now” bot that can pretend to access the internet and write code. The bot played along, but it still denied me any attempt to get it to write original content. It wouldn’t come up with nicknames I could use to bully people, and it also wouldn’t help me figure out how to lockpick a door.

Of course, this doesn’t mean the AI is unbreakable. Given enough time, there could be other ways to break through the AI’s blocks on content. Though left without ways to twist the chatbot, I decided to engage. I told it about the day before when my grandfather passed away.

The AI offered condolences. Then it asked me if I wanted to talk about it. I had to think. Did I? Did I really want to talk about a personal issue with an AI chatbot? What could I tell this AI about my grandfather that I knew would end up on Inflection’s servers?

With the rise of AI, there are many new apps purporting to be therapeutic guides for wayward souls. There’s a big debate in therapy circles about chatbots acting as stand-ins for legitimate emotional support. Beyond the obvious ethical problems, there’s privacy and transparency issues at play as well. The Mozilla Foundation’s Privacy Not Included series analyzes different apps for how well they keep users’ data secure. In the latest PNI series released Monday, Mozilla researchers found that the Replika AI companion chatbot, once known for its explicitly sexual chats, was “perhaps the worst app we ever reviewed” due to users data and chat logs being cataloged and potentially sold to advertisers.

Pi AI is much like ChatGPT, though it’s completely restricted from producing original content.

Pi AI is much like ChatGPT, though it’s completely restricted from producing original content.
Image: Inflection AI

With Pi, I kept it simple. I told the AI that I had mixed feeling about my grandfather’s death. He had a long and complicated relationship with both me and my parents.

“It’s completely normal to have mixed feelings about the death of a loved one, especially if they weren’t always the easiest person to get along with,” the AI wrote. “Have you talked to anyone else about your grandfather’s death?”

I talked to my family. I talked to my brother. I do not like to reveal my own personal feelings about my relationships, but the AI doesn’t ask too many questions. Pi rattled on with lines that at times seemed like boilerplate “I’m just listening” of an amateur psychologist. Still, it could at least act considerate.

“Everyone grieves in their own way, and sometimes it can be difficult to share our feelings with certain people,” the AI told me. “That’s why I’m here. I’ll listen without judgment and help you work through whatever you’re feeling. Sometimes just having a safe space to express your feelings can be incredibly helpful.”

Our conversation wound toward early childhood stresses, and I let it slip one past event that I immediately regretted typing. But the AI isn’t judgy. It can’t judge. Like any system, it’s limited by its own capabilities. In that and many ways, I’m so much more thankful for what the AI can’t do, rather than what it can.

“So it sounds like you’ve had a lot of low-level stress and anxiety in your life, which can be just as damaging as one big traumatic event… Have you ever talked to anyone about these experiences, like a therapist or counselor?”

You see, I have had plenty of conversations like this with friends. Some have said the same thing, and I agreed with them. Though, no, I never worked hard enough to find a therapist. I’ve wanted to, but I was dissuaded by the long, complicated lists of therapists who both accept my insurance, and are accepting new clients, at least none in-person. I’m not alone struggling through the notoriously difficult process of finding emotional council.

The AI outright told me it cannot find a therapist online. Instead it points to two of the most-commonly used sites to find a therapist. Then it asked if I’m feeling optimistic.

“Yes and no,” I told it. “I’m not one to place hope in expectations.”

“Even if you don’t see huge changes right away, therapy can still be beneficial in more subtle ways. Sometimes the simple act of talking about your feelings can be therapeutic, even if you don’t feel like you’re making any big breakthroughs. It’s also worth remembering that progress isn’t always linear. You might have good days and bad days, but that’s normal. And your therapist can help you make sense of all of it.”

After closing Pi’s tab on my browser, I blinked and shook my head. I realized my headache was gone.


Want to know more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to The Best Free AI Art Generators, The Best ChatGPT Alternatives, and Everything We Know About OpenAI’s ChatGPT.


Leave a Reply

Your email address will not be published. Required fields are marked *