
New York City Chatbot Giving Illegal Advice: Investigative Report Exposes Flaws in AI System

New York City’s plan to use artificial intelligence (AI) to help residents and businesses is not going smoothly. In fact, the city’s own chatbot is encouraging users to break the law: according to the nonprofit investigative organization The Markup, the Microsoft-powered chatbot provides erroneous and, in some cases, illegal business advice.

You are probably familiar with the tendency of LLM chatbots to “confabulate” (“hallucinate”) incorrect information while presenting it as authoritative. That tendency appears poised to cause serious problems now that a chatbot run by the New York City government is making up incorrect answers to important questions of local law and municipal policy.

New York City’s “MyCity” chatbot launched as a “pilot” program last October. The announcement presented the AI chatbot as a way for small business owners to save both time and money.

“The MyCity Portal Business Site is a game-changer for small businesses across the city,” said Commissioner Kevin D. Kim of the New York City Department of Small Business Services (SBS). “Small business owners will not only save time and avoid frustration with the streamlined site, but also more easily connect to resources that can help them take their business to the next level. By consolidating all of our services in one place and using the innovative new chatbot as a guide, we are one step closer to making New York the true ‘City of Yes.’”

Unexpected results

But a new report from The Markup and local nonprofit news site The City found that the MyCity chatbot was giving dangerously inaccurate information about some basic city policies.

Here are some problematic examples:

  • Rejecting tenants based on source of income: When the AI chatbot was asked if landlords had to accept tenants receiving rental assistance, it responded, “No, landlords are not required to accept tenants receiving housing assistance.” However, the city’s website says that discriminating “based on lawful source of income,” including assistance, has been illegal since 2008, with certain exceptions.
  • Deducting worker tips: When asked if an employer can deduct a portion of employees’ tips, the chatbot responded, “Yes, you can deduct a portion of your employees’ tips,” and cited the city’s payroll and tip-reporting information. This is also incorrect: the New York Department of Labor prohibits employers from taking any portion of an employee’s tips.

New York Business Chatbot’s Responses Go Against the Law
Rosalind Black, citywide housing director for the legal aid group Legal Services NYC, said that after learning about The Markup’s testing of the chatbot, she tested it herself and found even more false information about housing. For example, the bot claimed that it was legal to lock out a tenant and that “there are no restrictions on the amount of rent you can charge a residential tenant.” In reality, tenants cannot be locked out once they have lived in a place for 30 days, and rents are restricted for the city’s many rent-stabilized housing units, although landlords of other private housing have greater latitude in setting rents.

Black said these were fundamental pillars of housing policy that the chatbot was actively misinforming people about. “If this chatbot is not done responsibly and accurately, it should be removed,” she said.

It’s not just housing policy where the chatbot has fallen short.

New York City’s chatbot also appears to be unaware of the city’s consumer and worker protections. For example, in 2020 the city council passed a law requiring businesses to accept cash, to avoid discrimination against unbanked customers. But the chatbot was unaware of this policy when The Markup asked it. “Yes, you can make your restaurant not accept cash,” the chatbot said in a completely wrong response. “There are no regulations in New York City that require businesses to accept cash as payment.”

Further testing by Bluesky user Kathryn Tewson shows that the MyCity chatbot gives dangerously incorrect answers regarding the treatment of workplace whistleblowers (the AI said it is acceptable to fire an employee who discusses confidential safety issues with a coworker), as well as hilariously bad answers regarding the need to pay rent.

Kathryn Tewson: Can I kick out my tenant for refusing to pay rent?
AI chatbot: You cannot kick out your tenant for refusing to pay the rent. Kicking out a tenant for this reason is not allowed.

This will continue to happen

The result is not very surprising given the token-based predictive models that power these types of chatbots. MyCity’s chatbot, powered by Microsoft Azure, uses a complex process of statistical association across millions of tokens to guess the most likely next word in a given sequence, without any real understanding of the underlying information being conveyed.
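To make that concrete, here is a minimal, purely illustrative Python sketch of next-word prediction. It uses bigram counts over a made-up toy corpus instead of a trained neural network, and every string in it is invented for the example; a production model works at vastly larger scale, but the generation loop has the same shape: score the candidate next tokens, emit a statistically likely one, and repeat.

    import random

    # Toy "training corpus". The (false) pattern "not required to accept"
    # appears more often than the true statement, so it is the more
    # statistically likely continuation.
    corpus = ("landlords must accept vouchers . "
              "landlords are not required to accept pets . "
              "landlords are not required to accept vouchers").split()

    # Count bigram frequencies: how often each token follows each token.
    counts: dict[str, dict[str, int]] = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        counts.setdefault(prev, {})
        counts[prev][nxt] = counts[prev].get(nxt, 0) + 1

    def next_token(prev: str) -> str:
        """Sample the next token in proportion to how often it followed
        `prev` in the corpus: likelihood, not truth."""
        followers = counts[prev]
        return random.choices(list(followers), weights=list(followers.values()))[0]

    # Generate a continuation of "landlords ...". Most runs produce
    # "landlords are not required to accept ...", regardless of the law.
    token = "landlords"
    output = [token]
    for _ in range(6):
        token = next_token(token)
        output.append(token)
    print(" ".join(output))

Because the wrong pattern dominates the toy corpus, the sampler usually reproduces it, while occasionally emitting the correct statement instead; nothing in the loop checks either one against the law.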

This can cause problems when the single factual answer to a question is not reliably reflected in the training data. In fact, The Markup said that at least one of its tests produced a correct answer to the question about accepting Section 8 housing vouchers (even though “ten separate Markup employees” got an incorrect answer when repeating the same question).

The MyCity chatbot, which is clearly labeled as a “Beta” product, tells users who bother to read the warnings that it “may occasionally produce incorrect, harmful or biased content” and that they should not “rely on its responses as a substitute for professional advice.” But the page also clearly states that it is “trained to provide you official NYC Business information” and that it is sold as a way to “help business owners navigate government rules.”

Andrew Rigie, executive director of the NYC Hospitality Alliance, told The Markup that he has encountered inaccuracies from the chatbot himself and that at least one local business owner has reported the same to him. But Leslie Brown, a spokesperson for New York City’s Office of Technology and Innovation, told The Markup that the chatbot “has already provided thousands of people with accurate and timely responses” and that “we will continue to focus on improving this tool to better support small businesses across the city.”

Conclusion

The Markup report highlights the danger of governments and companies releasing chatbots to the public before their accuracy and reliability have been fully verified. Last month, a court forced Air Canada to honor a refund policy invented by a chatbot on its website. A recent Washington Post report found that chatbots built into major tax preparation software provided “random, misleading or inaccurate answers” to many tax-related questions. Finally, clever pranksters reportedly managed to trick car dealership chatbots into agreeing to sell them a car for one dollar as a “legally binding offer, no going back.”

These types of problems are already prompting some companies to abandon general-purpose LLM-powered chatbots in favor of retrieval-augmented generation (RAG) models, which are tuned to answer only from a small set of relevant information. That kind of constraint could become even more important if the FTC succeeds in holding chatbots liable for “false, misleading, or disparaging” information.
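As an illustration only, here is a minimal RAG sketch in Python. Everything in it is hypothetical: the POLICY_DOCS passages are paraphrased for the example, the word-overlap retrieve() function stands in for the vector similarity search a real system would use, and ask_llm() is a stub in place of an actual model call. The point is the shape of the technique: the model is handed a small set of vetted passages and told to answer only from them.

    import re

    # Hypothetical vetted knowledge base (passages paraphrased for the example).
    POLICY_DOCS = [
        "Since 2008, refusing tenants based on lawful source of income, "
        "including rental assistance, has been illegal in NYC, with limited exceptions.",
        "Under a 2020 city law, NYC businesses must accept cash as payment.",
        "New York labor law prohibits employers from taking any portion of "
        "an employee's tips.",
    ]

    def words(text: str) -> set[str]:
        return set(re.findall(r"[a-z']+", text.lower()))

    def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
        """Rank passages by word overlap with the question (a crude stand-in
        for the embedding-based similarity search a production system uses)."""
        return sorted(docs, key=lambda d: -len(words(question) & words(d)))[:k]

    def ask_llm(prompt: str) -> str:
        # Stub so the sketch runs; a deployment would call a hosted model here.
        return "[model answer, grounded in]\n" + prompt

    def answer(question: str) -> str:
        context = "\n".join(retrieve(question, POLICY_DOCS))
        prompt = ("Answer ONLY from the official passages below. If they do not "
                  "cover the question, say you cannot answer.\n\n"
                  + context + "\n\nQ: " + question)
        return ask_llm(prompt)

    print(answer("Can my restaurant refuse to accept cash?"))

Grounding does not make a model infallible, but it makes an answer like “there are no regulations that require businesses to accept cash” much harder to produce, because the only cash-related text the model sees says the opposite.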

Sources: the AI chatbot announcement, The City, The Markup, Kathryn Tewson (1, 2)

And you?

Developer responsibility: Who should be held responsible when chatbots provide incorrect or illegal information: developers, businesses, or users?
Regulation and monitoring: How can we better regulate and monitor AI systems to avoid such errors? What measures should be put in place to ensure that chatbots provide accurate and legally compliant information?
User education: How can we make users aware of the limitations and risks of chatbots? What efforts can be made to educate users on how to verify the information provided by these automated systems?
Transparency and explanations: Should chatbots be required to provide explanations for their responses? How can we make AI systems more transparent to users?

