The IT Privacy and Security Weekly Update Hits the Launderette for the Week Ending May 21st., 20245/21/2024 Episode 190 This week we start with a tale that will bring happiness to every University Students’ mother. - click the pic to hear the podcast - We follow with another that has one woman fuming while everyone involved claims it was a coincidence. There is an update on the tattletale car story and the short, sharp, slap that lawmakers gave automakers recently. We find out the name of the company whose employee was tricked into a $25 million transfer. Then a story that will make the blood boil of anyone who’s been let go during the ongoing waves of tech layoffs. A Cyber security giant tells us why large language models can never be secure. And we end with what we would almost call an obscene invasion of privacy from a collaboration tool that we all used to trust. We won’t promise you that this weeks update will get your socks clean, but at least there’s no pre-soaking required. Come on! Let’s wash! - click the pic to hear the podcast - US: Two Students Uncover Security Bug That Could Let Millions Do Their Laundry For Free https://techcrunch.com/2024/05/17/csc-serviceworks-free-laundry-million-machines/ Two university students discovered a security flaw in over a million internet-connected laundry machines operated by CSC ServiceWorks, allowing users to avoid payment and add unlimited funds to their accounts. The students, Alexander Sherbrooke and Iakov Taranenko from UC Santa Cruz, reported the vulnerability to the company, a major laundry service provider, in January but claim it remains unpatched. Sherbrooke said he was sitting on the floor of his basement laundry room in the early hours one January morning with his laptop in hand, and "suddenly having an 'oh s-' moment." From his laptop, Sherbrooke ran a script of code with instructions telling the machine in front of him to start a cycle despite having $0 in his laundry account. The machine immediately woke up with a loud beep and flashed "PUSH START" on its display, indicating the machine was ready to wash a free load of laundry. In another case, the students added an ostensible balance of several million dollars into one of their laundry accounts, which reflected in their CSC Go mobile app as though it were an entirely normal amount of money for a student to spend on laundry. So what's the upshot for you? If this gets university students laundry cleaned... the longer it stays unpatched the better. - click the pic to hear the podcast - Global: OpenAI Says Sky Voice in ChatGPT Will Be Paused After Concerns It Sounds Too Much Like Scarlett Johansson https://www.tomsguide.com/ai/chatgpt/openai-says-sky-voice-in-chatgpt-will-be-paused-after-concerns-it-sounds-too-much-like-scarlett-johansson OpenAI is pausing the use of the popular Sky voice in ChatGPT over concerns it sounds too much like the "Her" actress Scarlett Johansson. The company says the voices in ChatGPT were from paid voice actors. A final five were selected from an initial pool of 400 and it's purely a coincidence the unnamed actress behind the Sky voice has a similar tone to Johansson. Voice is about to become more prominent for OpenAI as it begins to roll out a new GPT-4o model into ChatGPT. With it will come an entirely new conversational interface where users can talk in real-time to a natural-sounding and emotion-mimicking AI. While the Sky voice and a version of ChatGPT Voice have been around for some time, the comparison to Johansson became more obvious due to OpenAI CEO Sam Altman, and many others, drawing the similarity between the new AI model and the movie "Her". In "Her," Scarlett Johansson voices an advanced AI operating system named Samantha, who develops a romantic relationship with a lonely writer played by Joaquin Phoenix. With its ability to mimic emotional responses, the parallels from GPT-4o were obvious. So what's the upshot for you? We understand Sam Altman's reference to "Her", the fact that Scarlett Johansen starred in the movie, and that the voice selected ....well it was all unfortunate conincidence. US: Eight Automakers Grilled by US Lawmakers Over Sharing of Connected Car Data With Police https://www.autoblog.com/2024/05/15/lawmakers-call-out-eight-automakers-for-sharing-connected-vehicle-data/ Automotive News recently reported that eight automakers sent vehicle location data to police without a court order or warrant. The eight companies told senators that they provide police with data when subpoenaed, getting a rise from several officials. BMW, Kia, Mazda, Mercedes-Benz, Nissan, Subaru, Toyota, and Volkswagen presented their responses to lawmakers. Senators Ron Wyden from Oregon and Ed Markey from Massachusetts penned a letter to the Federal Trade Commission, urging investigative action. "Automakers have not only kept consumers in the dark regarding their actual practices, but multiple companies misled consumers for over a decade by failing to honor the industry's own voluntary privacy principles," they wrote. Ten years ago, all of those companies agreed to the Consumer Privacy Protection Principles, a voluntary code that said automakers would only provide data with a warrant or order issued by a court. Subpoenas, on the other hand, only require approval from law enforcement. So what's the upshot for you? The article notes that the lawmakers praised Honda, Ford, Tesla, and Stellantis for requiring warrants, "except in the case of emergencies or with customer consent." Global: Deep Fake Scams Growing in Global Frequency and Sophistication, Victim Warns https://www.cnn.com/2024/05/16/tech/arup-deepfake-scam-loss-hong-kong-intl-hnk/index.html In January, a finance worker at Arup, a major engineering consulting firm, was tricked into a video call with deepfake impersonations of his company's CFO and staff, leading to a $25 million transfer. Initially suspicious of a phishing email from the UK office, the worker's doubts were alleviated by the convincing deepfake video call. Arup, which has 18,500 employees across 34 offices, confirmed the incident and reported it to Hong Kong police. Despite the fraud, Arup stated that their financial stability and operations were unaffected. The case highlights growing global concerns about the sophistication of deepfake technology, with Arup's leadership urging vigilance against such evolving scams. So what's the upshot for you? This incident underscores the increasing frequency and sophistication of cyberattacks, including invoice fraud, phishing, and deepfakes, prompting organizations like USAA to implement voice authentication for security. US/KP: A massive remote-work scam fooled 300 US companies into hiring North Koreans, prosecutors say https://www.msn.com/en-us/news/other/a-massive-remote-work-scam-fooled-300-us-companies-into-hiring-north-koreans-prosecutors-say/ar-BB1myT1E An Arizona woman is accused of aiding North Koreans in securing US remote-work jobs. They gained employment at Fortune 500 companies, including a TV network and a Silicon Valley firm. An Arizona woman has been accused of aiding North Koreans in securing remote-work jobs in the US and funneling their wages back to North Korea, which is subject to US sanctions, according to federal prosecutors. The US Attorney's Office for the District of Columbia announced in a press release on Thursday that Christina Marie Chapman, 49, was arrested on Wednesday and charged with nine counts, including conspiracy to defraud the US. According to prosecutors, the scheme began sometime in 2020 and used the stolen identities of about 60 US citizens. They said it impacted more than 300 companies and generated more than $6.8 million in revenue, which was sent back to North Korea. In the charging document, prosecutors allege that Chapman facilitated overseas IT workers posing as Americans using the stolen or borrowed identities of US citizens. So what's the upshot for you? How to make enemies and lose friends. While the US becomes littered with thousands of job cuts. Take real work and farm it out to North Korea so they can insert back doors into code, use the income to supply arms to Russia to destroy Ukraine and buiid a nuclear program to destroy what's left of the rest of the world. Global: Bruce Schneier Reminds LLM Engineers About the Risks of Prompt Injection Vulnerabilities https://www.schneier.com/blog/archives/2024/05/llms-data-control-path-insecurity.html Bruce Schneier highlights a critical vulnerability in large language models (LLMs), likening it to a flaw in 1970s phone systems exploited by John Draper. Both systems mix data with control commands, making them susceptible to manipulation. Schneier discusses how LLMs can be compromised through malicious instructions in training data, hidden commands in web content, or interactions with untrusted users. While specific attacks can be mitigated once identified, the fundamental issue remains due to the inherent design of LLMs where data influences the system's operations. This feature, while powerful, makes LLMs particularly vulnerable to prompt injection attacks. Schneier suggests that current defenses are piecemeal and advocates for better input sanitation and access controls. So what's the upshot for you? Ultimately, Schneier calls for a careful evaluation of LLMs' use, balancing their benefits against the security risks, and warns that until a way to separate data and control paths is discovered, LLMs will remain vulnerable in adversarial environments like the Internet. Global: User Outcry As Slack Scrapes Customer Data For AI Model Training https://www.securityweek.com/user-outcry-as-slack-scrapes-customer-data-for-ai-model-training/ Enterprise workplace collaboration platform Slack has sparked a privacy backlash with the revelation that it has been scraping customer data, including messages and files, to develop new AI and ML models. By default, and without requiring users to opt-in, Slack said its systems have been analyzing customer data and usage information (including messages, content and files) to build AI/ML models to improve the software. The company insists it has technical controls in place to block Slack from accessing the underlying content and promises that data will not lead across workplaces but, despite these assurances, corporate Slack admins are scrambling to opt-out of the data scraping. This line in Slack's communication sparked a social media controversy with the realization that content in direct messages and other sensitive content posted to Slack was being used to develop AI/ML models and that opting out world require sending e-mail requests: "If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at [email protected] with your workspace/org URL and the subject line 'Slack global model opt-out request'. We will process your request and respond once the opt-out has been completed." So what's the upshot for you? We are stunned by this news. Slack, and owner Salesforce, must know this is a great way to kill customer confidence. So to recap: Our first story provided insight as to why University graduations smelled so fresh this year. It wasn’t what we expected either, after all, what respectable University student hacks a washing machine and then finds out months later the hack still works? Then we found out that Scarlett Johansson was none too pleased to learn that Open AI’s new voice sounded just like hers. 400 actors provided voice samples, and it was sheer coincidence they picked the voice they picked. For now, that Open AI voice has gone mute, but the controversy hasn’t hurt Open AI or Scarlett. We got a list of carmakers who don’t share your data with police without a court order or warrant, and we’d just like to share them with you again: Honda, Ford, Tesla and Stellantis. We dropped GM from the list as they were busy selling your data to Lexus Nexus. It turned out that the company behind the deepfake AI $25 million transfer was Arup. They are anxious that we don’t fall into the same trap. Deepfakes are getting really good, so a second method of confirmation when you are being asked to send money or gift cards somewhere is a good idea. Ask that your company put that in place before you need to use it. Next we had an Arizona woman farming tech jobs out to North Korea through a collection of stolen identities. We think that she cant be punished enough. Not only was her activity funding a psychotic regime, but there’s a good chance that all the code written by the North Koreans is full of backdoors. Then Bruce Schneier took us back to the 1970s to explain why large language models can never be secure. You just cant mix data in with commands! And we end with Slack using our chats, personal communications and attachments to train it’s own AI model, oh and by the way if you want them to stop doing that the request has to come from your Slack administrator in an email to the customer experience team with this specific phrase in the subject line 'Slack global model opt-out request'. Any more hoops for us to jump through? If Salesforce’ (the new owner of Slack) price drops 17% after this update, you’ll know why. Our quote of the week – "You know it's time to do the laundry when you dry off with a sneaker." – Zach Galifianakis. That's it for this week. Stay safe, stay secure, remember to separate out the delicates, and we'll see you in se7en! Leave a Reply. |