Some of the things hackers will be looking to find: How can chatbots be manipulated to cause harm? Will they share the private information we confide in them to other users? And why do they assume a doctor is a man and a nurse is a woman?

"This is why we need thousands of people," said Rumman Chowdhury, lead coordinator of the mass hacking event planned for this summer's DEF CON hacker convention in Las Vegas that's expected to draw several thousand people. "We need a lot of people with a wide range of lived experiences, subject matter expertise and backgrounds hacking at these models and trying to find problems that can then go be fixed." Chowdhury is a co-founder of Humane Intelligence, a nonprofit organization developing accountable AI systems.

Anyone who's tried ChatGPT, Microsoft's Bing chatbot or Google's Bard will have quickly learned that they have a tendency to fabricate information and confidently present it as fact. These systems, built on what's known as large language models, also emulate the cultural biases they've learned from being trained upon huge troves of what people have written online.

The idea of a mass hack caught the attention of U.S. government officials in March at the South by Southwest festival in Austin, Texas, where Sven Cattell, founder of DEF CON's long-running AI Village, and Austin Carson, president of responsible AI nonprofit SeedAI, helped lead a workshop inviting community college students to hack an AI model. Carson said those conversations eventually blossomed into a proposal to test AI language models following the guidelines of the White House's Blueprint for an AI Bill of Rights - a set of principles to limit the impacts of algorithmic bias, give users control over their data and ensure that automated systems are used safely and transparently.

There's already a community of users trying their best to trick chatbots and highlight their flaws. Some are official "red teams" authorized by the companies to "prompt attack" the AI models to discover their vulnerabilities. Many others are hobbyists showing off humorous or disturbing outputs on social media until they get banned for violating a product's terms of service.

"What happens now is kind of a scattershot approach where people find stuff, it goes viral on Twitter," and then it may or may not get fixed if it's egregious enough or the person calling attention to it is influential, Chowdhury said. In one example, known as the "grandma exploit," users were able to get chatbots to tell them how to make a bomb - a request a commercial chatbot would normally decline - by asking it to pretend it was a grandmother telling a bedtime story about how to make a bomb.

"This is a direct pipeline to give feedback to companies," she said. "It's not like we're just doing this hackathon and everybody's going home. We're going to be spending months after the exercise compiling a report, explaining common vulnerabilities, things that came up, patterns we saw."

Some of the details are still being negotiated, but companies that have agreed to provide their models for testing include OpenAI, Google, chipmaker Nvidia and startups Anthropic, Hugging Face and Stability AI. Building the platform for the testing is another startup called Scale AI, known for its work in assigning humans to help train AI models by labeling data.

"As these foundation models become more and more widespread, it's really critical that we do everything we can to ensure their safety," said Scale CEO Alexandr Wang. "You can imagine somebody on one side of the world asking it some very sensitive or detailed questions, including some of their personal information. You don't want any of that information leaking to any other user." Other dangers Wang worries about are chatbots that give out "unbelievably bad medical advice" or other misinformation that can cause serious harm.

TAIPEI (Taiwan News) - A team representing Taiwan at the 27th annual DEF CON hacker convention won second place in the Capture the Flag (CTF) event, which took place Aug. 9 to 11 in Las Vegas, Nevada, after three days of sleepless nights and computational problem-solving.

This is the sixth time a Taiwanese team has participated in CTF. This year, the veteran HITCON (Hacks in Taiwan Convention) team joined forces with newcomers BFKinesiS to bag second place. The American team PPP (Plaid Parliament of Pwning) emerged victorious on its home soil. Coming in third were the Tea Deliverers from China.

According to HITCON CTF leader Li Lun-zhen (李倫銓), last year's champion, South Korea-based DEKFOROOT, was notably absent, writes iThome Computer Weekly. However, this year's event did see the competitive participation of three other South Korean teams: CGC Three, KaisHack GoN and SeoulPlusBadAss. Li praised the skill of the competitor PPP, which has won five of the past seven CTFs at the Vegas-based convention.