Synthetic media technology has been around for a long time. The act of faking content is nothing new, dating back more than two thousand years to Greek inventors who designed machines capable of writing text, creating sounds, and playing music. These machines, however, weren't capable of generating original content.
It was the rise of artificial intelligence (AI) that took synthetic media to a whole new level. Innovations in AI and machine learning (ML) have given birth to the most prominent form of synthetic media: the deepfake.
Deepfake refers to synthetic media in which artificial intelligence is used to replace a person in an image or video with someone else's likeness in a way that makes the result look authentic.
It’s easier to remember this way:
A form of AI called deep learning is used to create realistic-looking fake media.
Hence the portmanteau: deepfake.
With the ever-increasing power of ML and AI technologies, one can now manipulate or generate images, videos, and even audio files with a high potential to deceive.
This AI-powered website, for example, churns out deceptively realistic portraits of people who do not exist. Refresh the page to see a new (fake) person.
This viral TikTok account dedicates itself to creating and posting short videos of a deepfake Tom Cruise. It's definitely entertaining... in a creepy, unsettling way.
This 2020 video of a speech by a deepfake Queen Elizabeth aptly delivers a warning about how technology is enabling the proliferation of misinformation and fake news in our highly digital age.
As much as we'd like to celebrate the advancement of ML and AI technologies, the US Federal Bureau of Investigation (FBI) urges private organizations to stay vigilant against entities who use deepfake technology for malicious campaigns.
In a Private Industry Notification (PIN) issued by its Cyber Division in March 2021, the FBI anticipates that malicious actors “almost certainly will leverage synthetic content for cyber and foreign influence operations.”
This isn’t really anything new. Since 2019, several campaigns have been found to use ML-generated social media profile images. However, the recent advances in ML and AI give cybercriminals the opportunity to generate and manipulate content that serves their malicious plans.
Healthcare practices, large and small, have long been prime targets for cybercrime. This is because you handle especially sensitive data that can be worth a lot of money to cybercriminals.
If hackers are working hard to improve the quality and impact of their campaigns, you can be sure they're doing so with healthcare practices like yours as their target. Whether it's tricking your staff into letting them into your network or launching external attacks against your cybersecurity defenses, hackers will use every technological advancement to their advantage.
According to the PIN, the FBI anticipates that malicious actors will employ deepfake technology broadly across their cyber operations. The sophistication of synthetic media will take their existing spearphishing and social engineering campaigns to a different, more potent level.
Besides using the technology in existing campaigns, cybercriminals are also expected to employ deepfake tools in a newly defined cyberattack vector called Business Identity Compromise (BIC).
BIC involves creating synthetic corporate personas or imitating existing employees, most likely to gain access to your company's bank account, line of credit, tax refund, or personnel information.
As you may have observed from the examples above, some deepfakes are so realistic that they’re almost impossible to detect.
Here are a few tips from the FBI on how you can identify synthetic media such as deepfakes and mitigate their impact:
Visual indicators include distortions, warping, and other inconsistencies. In photos, check the spacing and placement of the person's eyes across several images; synthetic faces generated by the same tool often show suspiciously uniform eye positions.
In videos, check for inconsistencies in the movement of the person's head and torso. You can also check whether the movement of their face and lips is consistent with the audio.
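For the technically curious, the photo check above can even be roughed out in code. Below is a toy sketch using the open-source face_recognition library; the file names are hypothetical placeholders, and this illustrates the idea only, not a dependable deepfake detector.

```python
# Rough illustration of the eye-geometry check described above, using the
# open-source face_recognition library (pip install face_recognition).
# Toy heuristic only; real forensic analysis is far more involved.
import numpy as np
import face_recognition

def eye_geometry(image_path):
    """Return (spacing, midpoint) of the eyes, normalized by face width."""
    image = face_recognition.load_image_file(image_path)
    locations = face_recognition.face_locations(image)
    landmarks = face_recognition.face_landmarks(image)
    if not locations or not landmarks:
        raise ValueError(f"No face found in {image_path}")
    top, right, bottom, left = locations[0]
    face_width = right - left
    left_eye = np.mean(landmarks[0]["left_eye"], axis=0)
    right_eye = np.mean(landmarks[0]["right_eye"], axis=0)
    spacing = np.linalg.norm(right_eye - left_eye) / face_width
    midpoint = ((left_eye + right_eye) / 2 - np.array([left, top])) / face_width
    return spacing, midpoint

# Compare several images that supposedly show different (or the same) people.
# These file names are hypothetical.
for path in ["profile_1.jpg", "profile_2.jpg", "profile_3.jpg"]:
    spacing, midpoint = eye_geometry(path)
    print(f"{path}: spacing={spacing:.3f}, "
          f"midpoint=({midpoint[0]:.3f}, {midpoint[1]:.3f})")
# Near-identical eye positions across supposedly different people, or wildly
# inconsistent ones across photos of the same person, warrant a closer look.
```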
If identifying synthetic content is too difficult or too confusing for you, a third-party research and forensic organization can help evaluate the media. A trusted IT and cybersecurity company can also offer valuable insight and advice.
An example would be the SIFT methodology, which encourages us to carry out the following steps when taking in information online:
Stop.
Investigate the source.
Find better coverage.
Trace claims, quotes, and media to their original context.
Deepfakes may be getting more and more realistic, but you can still protect your practice from getting fooled by cybercriminals who use them. Good cyber hygiene can significantly lower your risks of falling victim to malicious actors.
Here are a few FBI-endorsed security measures that your practice can adopt:
Inform and educate your entire workforce about the risks of deepfakes, making sure to include those in upper management and senior executive positions. Because they have access to the most sensitive and most valuable information in your organization, they are prime targets for cybercrime.
Conduct cybersecurity awareness training among your staff so that they know how to spot, avoid, and report social engineering, phishing, and other cyberattack attempts. You can partner with trusted experts who specialize in healthcare technology and have them share their valuable insight and advice with your team.
Do not assume an online persona is legitimate. Seek multiple independent sources of information to validate or verify it. If you receive attachments, links, or emails from senders you don’t recognize, do not open them.
Never provide personal information in response to unsolicited inquiries. This includes usernames, passwords, birth dates, social security numbers, financial data, and other sensitive information.
If you receive requests for sensitive or corporate information, be cautious about providing it electronically or over the phone. If you can, verify these requests via secondary channels of communication.
Use multi-factor authentication (MFA) on all systems to add an extra layer of security to your network. According to Microsoft, MFA can block 99.9% of account compromise attacks.
| Read more about MFA and how it can help your practice here...
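For those curious about what MFA looks like under the hood, here is a minimal sketch of the time-based one-time-password (TOTP) flow behind most authenticator apps, using the open-source pyotp library. The account name and issuer below are hypothetical placeholders.

```python
# Minimal sketch of TOTP verification, the mechanism behind most
# authenticator apps, using the open-source pyotp library
# (pip install pyotp). A real system would keep each user's secret
# in a secure credential store, not in code.
import pyotp

# At enrollment, generate a unique base32 secret for the user...
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# ...and show it once as a provisioning URI (usually rendered as a QR code)
# for the user's authenticator app. Name and issuer are hypothetical.
print(totp.provisioning_uri(name="staff@example-practice.com",
                            issuer_name="Example Practice"))

# At login, after the password check, require the current 6-digit code.
code = input("Enter the code from your authenticator app: ")
if totp.verify(code, valid_window=1):  # tolerate one 30-second step of drift
    print("Second factor accepted.")
else:
    print("Invalid code. Access denied.")
```

The point of the second factor is that a password harvested through a deepfake-assisted phishing call is no longer enough on its own to get into your systems.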
Establish and implement processes that allow your practice to continue operations in the event one of your accounts is compromised and used to spread synthetic content.
All this advice from the FBI is extremely helpful to healthcare practices everywhere. Unfortunately, dealing with threats involving synthetic media can be highly technical, and you may not be able to handle everything yourself.
For maximum protection and round-the-clock support, partner with a trusted IT company that specializes in protecting and optimizing healthcare practices.