• 0 Posts
  • 5 Comments
Joined 1 year ago
Cake day: June 1st, 2023

  • https://www.bleepingcomputer.com/news/security/genetics-firm-23andme-says-user-data-stolen-in-credential-stuffing-attack/

    The information that has been exposed from this incident includes full names, usernames, profile photos, sex, date of birth, genetic ancestry results, and geographical location.

    The threat actor accessed a small number of 23andMe accounts and then scraped the data of their DNA Relative matches, which shows how opting into a feature can have unexpected privacy consequences.

    • Usernames, profile photos, DoB

    These can be linked to other online accounts. That opens the door to phishing, scamming, or gathering additional information on the victims, which can lead to more sophisticated/personalised scams. Older, less tech-savvy users are better targets for scammers.

    • Username, sex, DoB, genetic ancestry, location data

    Data aggregators can sell this info to health insurance companies or any other system, which can then discriminate based on genes, sex, age, or location.

    • All of this information

    Can contribute to fraud committed in the victim’s name if someone collects enough information about them from different sources.

    • DNA relatives

    Having enough information about a user makes it possible to target their now-known relatives with personalised scams.

    The people that did this probably didn’t know what information they were going to get; maybe they were hoping for payment info and settled for trying to sell what they got instead.

    Any information, no matter how useless it might seem, is better than no information, and enough useless information in the wrong hands can be very valuable.

    There are countless data breaches every year, and people will collect them all and link different accounts from different breaches until they have enough information. Most people use the same email address for every website and a lot of people reuse the same passwords, which is how this leak occurred: the attackers took email/password pairs exposed in other breaches and tried them against 23andMe (credential stuffing). Knowing that these users reused an email/password combination here means there’s a very good chance they’ve reused it elsewhere.

    You can check which data breaches have occurred and whether your email or password has been posted in any of these dumps at https://haveibeenpwned.com/
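    For passwords specifically, haveibeenpwned also exposes a public Pwned Passwords range API that you can query without ever sending the full password, or even its full hash, anywhere. A rough Python sketch (the function name and the test password are just placeholders):

    ```python
    # Rough sketch: check a password against the Pwned Passwords range API.
    # Only the first 5 characters of the SHA-1 hash are sent (k-anonymity),
    # so the service never sees the password or its full hash.
    import hashlib
    import urllib.request

    def pwned_count(password: str) -> int:
        """Return how many times this password appears in known breach dumps."""
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
            body = resp.read().decode("utf-8")
        # The response is one "<hash-suffix>:<count>" pair per line; match ours locally.
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate.strip() == suffix:
                return int(count)
        return 0

    if __name__ == "__main__":
        print(pwned_count("password123"))  # reused passwords like this show up millions of times
    ```

    A non-zero count doesn’t mean your account was breached, just that the password itself is circulating in dumps, which is exactly what credential-stuffing attacks like this one feed on.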

    Once the information is out there, it’s out there for good, and what might seem trivial to you now could be valuable to someone else tomorrow.

  • AI regulation is definitely needed; self-regulation never works. Look at how Google and Meta have been operating: even now, with GDPR in place, they’re still getting away with abusing users’ data with no consequences.

    “OpenAI did not tell us what good regulation should look like,” the person said.

    “What they’re saying is basically: trust us to self-regulate,” says Daniel Leufer, a senior policy analyst focused on AI at Access Now’s Brussels office.

    I should hope OpenAI didn’t tell them how to regulate OpenAI, and I really hope this isn’t the only regulation we see. Since technology is constantly advancing, we’re going to need to keep updating regulation to stop companies like OpenAI from getting out of control the way Google has.

    OpenAI argued that, for example, the ability of an AI system to draft job descriptions should not be considered a “high risk” use case, nor the use of an AI in an educational setting to draft exam questions for human curation. After OpenAI shared these concerns last September, an exemption was added to the Act

    This bothers me. Job descriptions are already ridiculous, with over-the-top requirements for jobs that don’t need them, and feeding those prompts into AI is only going to make that worse.

    With regard to drafting exams: doesn’t it make the exams somewhat redundant if the experts on the material being examined can’t even come up with the questions and problems themselves? And why should students bother engaging with the material when they could just use AI, thanks to this loose regulation?

    Researchers have demonstrated that ChatGPT can, with the right coaxing, be vulnerable to a type of exploit known as a jailbreak, where specific prompts can cause it to bypass its safety filters and comply with instructions to, for example, write phishing emails or return recipes for dangerous substances.

    Unfortunately, since this regulation isn’t global and there are so many open-source models that can run on consumer hardware, there is no real way to regulate jailbreaking prompts, and this is always going to be an issue. On the other hand, these open-source, low-powered models are needed to give users more options and privacy; this is where we went wrong with search engines and operating systems.