Who is a Network Specialist?
If you are looking for the write-up of my CISSP experience, please click here (PDF)
ISC2 OFFICIAL CISSP Study Guide
Boson and Official ISC2 Practice Tests.
English dictionary. What would I do without you.
ITProTV Accelerated Course (and some long versions)
Destination Certification Mind Maps (YouTube).
Computerphile Videos on various topics (YouTube).
Kelly Handerhan’s CISSP exam tips (YouTube)
Last but not least, join a study group.
There are plenty, but the one I recommend is Certification Station (certificationstation.org).
And if you are looking for free discussions and sessions, join me every Saturday exclusively on Certification Station on Discord.
If you came here looking for old notes and videos, go to https://icsbits.com/cissp-study/
Good luck.
Facebook and its sister properties Instagram and WhatsApp are suffering from ongoing, global outages. We don’t yet know why this happened, but the how is clear: Earlier this morning, something inside Facebook caused the company to revoke key digital records that tell computers and other Internet-enabled devices how to find these destinations online.
Doug Madory is director of internet analysis at Kentik, a San Francisco-based network monitoring company. Madory said at approximately 11:39 a.m. ET today (15:39 UTC), someone at Facebook caused an update to be made to the company’s Border Gateway Protocol (BGP) records. BGP is a mechanism by which Internet service providers of the world share information about which providers are responsible for routing Internet traffic to which specific groups of Internet addresses.
In simpler terms, sometime this morning Facebook took away the map telling the world’s computers how to find its various online properties. As a result, when one types Facebook.com into a web browser, the browser has no idea where to find Facebook.com, and so returns an error page.
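The failure was visible from any machine on the Internet: with Facebook's routes withdrawn, its authoritative DNS servers became unreachable, and name resolution itself failed. A minimal sketch of that check in Python (illustrative only; the `can_resolve` helper is a name chosen here, not anything from the outage itself):

```python
import socket

def can_resolve(hostname: str) -> bool:
    """Return True if the local resolver can map hostname to an IP address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        # NXDOMAIN, SERVFAIL, or an unreachable resolver all land here
        return False
```

During the outage, a check like this would have come back False for facebook.com, which is the same condition a web browser surfaces as an error page.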
In addition to stranding billions of users, the Facebook outage also has stranded its employees from communicating with one another using their internal Facebook tools. That’s because Facebook’s email and tools are all managed in house and via the same domains that are now stranded.
“Not only are Facebook’s services and apps down for the public, its internal tools and communications platforms, including Workplace, are out as well,” New York Times tech reporter Ryan Mac tweeted. “No one can do any work. Several people I’ve talked to said this is the equivalent of a ‘snow day’ at the company.”
The mass outage comes just hours after CBS’s 60 Minutes aired a much-anticipated interview with Frances Haugen, the Facebook whistleblower who recently leaked a number of internal Facebook investigations showing the company knew its products were causing mass harm, and that it prioritized profits over taking bolder steps to curtail abuse on its platform — including disinformation and hate speech.
We don’t know how or why the outages persist at Facebook and its other properties, but the changes had to have come from inside the company, as Facebook manages those records internally. Whether the changes were made maliciously or by accident is anyone’s guess at this point.
Madory said it could be that someone at Facebook just screwed up.
“In the past year or so, we’ve seen a lot of these big outages where they had some sort of update to their global network configuration that went awry,” Madory said. “We obviously can’t rule out someone hacking them, but they also could have done this to themselves.”
In the meantime, several different domain registration companies listed the domain Facebook.com as up for sale. There’s no reason to believe this domain will actually be sold as a result, but it’s fun to consider how many billions of dollars it could fetch on the open market.
This is a developing story and will likely be updated throughout the day.
The U.S. Federal Communications Commission (FCC) is asking for feedback on new proposed rules to crack down on SIM swapping and number port-out fraud, increasingly prevalent scams in which identity thieves hijack a target’s mobile phone number and use that to wrest control over the victim’s online identity.
In a long-overdue notice issued Sept. 30, the FCC said it plans to move quickly on requiring the mobile companies to adopt more secure methods of authenticating customers before redirecting their phone number to a new device or carrier.
“We have received numerous complaints from consumers who have suffered significant distress, inconvenience, and financial harm as a result of SIM swapping and port-out fraud,” the FCC wrote. “Because of the serious harms associated with SIM swap fraud, we believe that a speedy implementation is appropriate.”
The FCC said the proposal was in response to a flood of complaints to the agency and the U.S. Federal Trade Commission (FTC) about fraudulent SIM swapping and number port-out fraud. SIM swapping happens when the fraudsters trick or bribe an employee at a mobile phone store into transferring control of a target’s phone number to a device they control.
From there, the attackers can reset the password for almost any online account tied to that mobile number, because most online services still allow people to reset their passwords simply by clicking a link sent via SMS to the phone number on file.
Scammers commit number port-out fraud by posing as the target and requesting that their number be transferred to a different mobile provider (and to a device the attackers control).
The FCC said the carriers have traditionally sought to address both forms of phone number fraud by requiring static data about the customer that is no longer secret and has been exposed in a variety of places already — such as date of birth and Social Security number. By way of example, the commission pointed to the recent breach at T-Mobile that exposed this data on 40 million current, past and prospective customers.
What’s more, victims of SIM swapping and number port-out fraud are often the last to know about their victimization. The FCC said it plans to prohibit wireless carriers from allowing a SIM swap unless the carrier uses a secure method of authenticating its customer. Specifically, the commission proposes that carriers be required to verify a “pre-established password” with customers before making any changes to their accounts.
According to the FCC, several examples of pre-established passwords include:
-a one-time passcode sent via text message to the account phone number or a pre-registered backup number;
-a one-time passcode sent via email to the email address associated with the account; or
-a passcode sent via a voice call to the account phone number or a pre-registered backup telephone number.
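All three options above boil down to the same server-side pattern: issue a short random code with an expiry, then compare what the customer submits against it. A minimal sketch of that flow (illustrative only; the function names and the 5-minute validity window are assumptions, not anything the FCC specifies):

```python
import secrets
import time

CODE_TTL = 300  # seconds a one-time passcode stays valid (assumed window)

def issue_code():
    """Generate a 6-digit one-time passcode and its expiry timestamp."""
    code = f"{secrets.randbelow(10**6):06d}"
    return code, time.time() + CODE_TTL

def verify_code(submitted, code, expires_at, now=None):
    """Constant-time comparison plus an expiry check."""
    now = time.time() if now is None else now
    return now <= expires_at and secrets.compare_digest(submitted, code)
```

Note the use of `secrets` rather than `random`: the code must be unpredictable, and the comparison must not leak timing information.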
The commission said it was also considering updating its rules to require wireless carriers to develop procedures for responding to failed authentication attempts and to notify customers immediately of any requests for SIM changes.
Additionally, the FCC said it may impose additional customer service, training, and transparency requirements for the carriers, noting that too many customer service personnel at the wireless carriers lack training on how to assist customers who’ve had their phone numbers stolen.
The FCC said some of the consumer complaints it has received “describe wireless carrier customer service representatives and store employees who do not know how to address instances of fraudulent SIM swaps or port-outs, resulting in customers spending many hours on the phone and at retail stores trying to get resolution. Other consumers complain that their wireless carriers have refused to provide them with documentation related to the fraudulent SIM swaps, making it difficult for them to pursue claims with their financial institutions or law enforcement.”
“Several consumer complaints filed with the Commission allege that the wireless carrier’s store employees are involved in the fraud, or that carriers completed SIM swaps despite the customer having previously set a PIN or password on the account,” the commission continued.
Allison Nixon, an expert on SIM swapping attacks and chief research officer at New York City-based cyber intelligence firm Unit221B, said any new authentication requirements will have to balance the legitimate use cases for customers requesting a new SIM card when their device is lost or stolen. A SIM card is the small, removable smart card that associates a mobile device to its carrier and phone number.
“Ultimately, any sort of static defense is only going to work in the short term,” Nixon said. “The use of SMS as a 2nd factor in itself is a static defense. And the criminals adapted and made the problem actually worse than the original problem it was designed to solve. The long term solution is that the system needs to be responsive to novel fraud schemes and adapt to it faster than the speed of legislation.”
Eager to weigh in on the FCC’s proposal? They want to hear from you. The electronic comment filing system is here, and the docket number for this proceeding is WC Docket No. 21-341.
In February, KrebsOnSecurity wrote about a novel cybercrime service that helped attackers intercept the one-time passwords (OTPs) that many websites require as a second authentication factor in addition to passwords. That service quickly went offline, but new research reveals a number of competitors have since launched bot-based services that make it relatively easy for crooks to phish OTPs from targets.
Many websites now require users to supply both a password and a numeric code/OTP token sent via text message, or one generated by mobile apps like Authy and Google Authenticator. The idea is that even if the user’s password gets stolen, the attacker still can’t access the user’s account without that second factor — i.e. without access to the victim’s mobile device or phone number.
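Apps like Authy and Google Authenticator implement the TOTP standard (RFC 6238): the device and the server share a secret, and each 30-second window yields a fresh code via an HMAC over a time-based counter. A self-contained sketch using only the standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, step=30, digits=6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): low nibble of the last byte picks
    # a 4-byte window; mask the sign bit; keep the low `digits` digits.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10**digits:0{digits}d}"
```

The published RFC 6238 test vector (ASCII secret "12345678901234567890", time 59) yields the 8-digit code 94287082, which makes a handy sanity check. Nothing in this math proves *who* typed the code in, which is exactly the gap the interception bots exploit.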
The OTP interception service featured earlier this year — Otp[.]agency — advertised a web-based bot designed to trick targets into giving up OTP tokens. The customer would enter a target’s phone number and name, and OTP Agency would initiate an automated phone call that alerts that person about unauthorized activity on their account.
The call would prompt the target to enter an OTP token generated by their phone’s mobile app (“for authentication purposes”), and that code would then get relayed back to the bad guy customers’ panel at the OTP Agency website.
OTP Agency took itself offline within hours of that story. But according to research from cyber intelligence firm Intel 471, multiple new OTP interception services have emerged to fill that void. And all of them operate via Telegram, a cloud-based instant messaging system.
“Intel 471 has seen an uptick in services on the cybercrime underground that allow attackers to intercept one-time password (OTP) tokens,” the company wrote in a blog post today. “Over the past few months, we’ve seen actors provide access to services that call victims, appear as a legitimate call from a specific bank and deceive victims into typing an OTP or other verification code into a mobile phone in order to capture and deliver the codes to the operator. Some services also target other popular social media platforms or financial services, providing email phishing and SIM swapping capabilities.”
Intel 471 says one new Telegram OTP bot called “SMSRanger” is popular because it’s remarkably easy to use, and probably because of the many testimonials posted by customers who seem happy with its frequent rate of success in extracting OTP tokens when the attacker already has the target’s “fullz,” personal information such as Social Security number and date of birth. From their analysis:
“Those who pay for access can use the bot by entering commands similar to how bots are used on popular workforce collaboration tool Slack. A simple slash command allows a user to enable various ‘modes’ — scripts aimed at various services — that can target specific banks, as well as PayPal, Apple Pay, Google Pay, or a wireless carrier.
Once a target’s phone number has been entered, the bot does the rest of the work, ultimately granting access to whatever account has been targeted. Users claim that SMSRanger has an efficacy rate of about 80% if the victim answered the call and the full information (fullz) the user provided was accurate and updated.”
Another OTP interception service called SMS Buster requires a tad more effort from a customer, Intel 471 explains:
“The bot provides options to disguise a call to make it appear as a legitimate contact from a specific bank while letting the attackers choose to dial from any phone number. From there, an attacker could follow a script to trick a victim into providing sensitive details such as an ATM personal identification number (PIN), card verification value (CVV) and OTP, which could then be sent to an individual’s Telegram account. The bot, which was used by attackers targeting Canadian victims, gives users the chance to launch attacks in French and English.”
These services are springing up because they work and they’re profitable. And they’re profitable because far too many websites and services funnel users toward multi-factor authentication methods that can be intercepted, spoofed, or misdirected — like SMS-based one-time codes, or even app-generated OTP tokens.
The idea behind true “two-factor authentication” is that the user is required to present two out of three of the following: Something they have (mobile devices); something they know (passwords); or something they are (biometrics). For example, you present your credentials to a website, and the site prompts you to approve the login via a prompt that pops up on your registered mobile device. That is true two-factor authentication: Something you have, and something you know (and maybe also even something you are).
In addition, these so-called “push notification” methods include important time-based contexts that add security: They happen directly after the user submits their credentials; and the opportunity to approve the push notification expires after a short period.
But in so many instances, what sites request is basically two things you know (a password and a one-time code) to be submitted through the same channel (a web browser). This is usually still better than no multi-factor authentication at all, but as these services show, there are now plenty of options for circumventing this protection.
I hope these OTP interception services make clear that you should never provide any information in response to an unsolicited phone call. It doesn’t matter who claims to be calling: If you didn’t initiate the contact, hang up. Don’t put them on hold while you call your bank; the scammers can get around that, too. Just hang up. Then you can call your bank or whoever else you need.
Unfortunately, those most likely to fall for these OTP interception schemes are people who are less experienced with technology. If you’re the resident or family IT geek and have the ability to update or improve the multi-factor authentication profiles for your less tech-savvy friends and loved ones, that would be a fabulous way to show you care — and to help them head off a potential disaster at the hands of one of these bot services.
When was the last time you reviewed your multi-factor settings and options at the various websites entrusted with your most precious personal and financial information? It might be worth paying a visit to 2fa.directory (formerly twofactorauth[.]org) for a checkup.
The new $30 Airtag tracking device from Apple has a feature that allows anyone who finds one of these tiny location beacons to scan it with a mobile phone and discover its owner’s phone number if the Airtag has been set to lost mode. But according to new research, this same feature can be abused to redirect the Good Samaritan to an iCloud phishing page — or to any other malicious website.
The Airtag’s “Lost Mode” lets users alert Apple when an Airtag is missing. Setting it to Lost Mode generates a unique URL at https://found.apple.com, and allows the user to enter a personal message and contact phone number. Anyone who finds the Airtag and scans it with an Apple or Android phone will immediately see that unique Apple URL with the owner’s message.
When scanned, an Airtag in Lost Mode will present a short message asking the finder to call the owner at their specified phone number. This information pops up without asking the finder to log in or provide any personal information. But your average Good Samaritan might not know this.
That’s important because Apple’s Lost Mode doesn’t currently stop users from injecting arbitrary computer code into its phone number field — such as code that causes the Good Samaritan’s device to visit a phony Apple iCloud login page.
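The underlying fix is classic input validation: a field meant to hold a phone number should accept only the characters a phone number can contain, rather than trying to block known-bad markup. A sketch of the allow-list approach (hypothetical names and pattern; not Apple's actual code):

```python
import re

# Optional "+", a leading digit, then 4-18 digits or common separators.
PHONE_RE = re.compile(r"\+?[0-9][0-9 ().-]{4,18}")

def safe_phone_field(value: str) -> str:
    """Accept only plausible phone numbers; reject markup/script payloads."""
    if not PHONE_RE.fullmatch(value):
        raise ValueError("rejected: not a phone number")
    return value
```

An allow-list like this rejects `<script>` payloads and `javascript:` URLs as a side effect of admitting only digits and punctuation, which is why it tends to age better than a deny-list of known-bad strings.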
The vulnerability was discovered and reported to Apple by Bobby Rauch, a security consultant and penetration tester based in Boston. Rauch told KrebsOnSecurity the Airtag weakness makes the devices cheap and possibly very effective physical trojan horses.
“I can’t remember another instance where these sort of small consumer-grade tracking devices at a low cost like this could be weaponized,” Rauch said.
Consider the scenario where an attacker drops a malware-laden USB flash drive in the parking lot of a company he wants to hack into. Odds are that sooner or later some employee is going to pick that sucker up and plug it into a computer — just to see what’s on it (the drive might even be labeled something tantalizing, like “Employee Salaries”).
If this sounds like a script from a James Bond movie, you’re not far off the mark. A USB stick with malware is very likely how U.S. and Israeli cyber hackers got the infamous Stuxnet worm into the internal, air-gapped network that powered Iran’s nuclear enrichment facilities a decade ago. In 2008, a cyber attack described at the time as “the worst breach of U.S. military computers in history” was traced back to a USB flash drive left in the parking lot of a U.S. Department of Defense facility.
In the modern telling of this caper, a weaponized Airtag tracking device could be used to redirect the Good Samaritan to a phishing page, or to a website that tries to foist malicious software onto her device.
Rauch contacted Apple about the bug on June 20, but for three months the company would say only that it was still investigating whenever he inquired. A few hours after Apple was contacted by KrebsOnSecurity last Friday, the company sent Rauch a follow-up email stating they planned to address the weakness in an upcoming update, and in the meantime would he mind not talking about it publicly?
Rauch said Apple never acknowledged basic questions he asked about the bug, such as if they had a timeline for fixing it, and if so whether they planned to credit him in the accompanying security advisory. Or whether his submission would qualify for Apple’s “bug bounty” program, which promises financial rewards of up to $1 million for security researchers who report security bugs in Apple products.
Rauch said he’s reported many software vulnerabilities to other vendors over the years, and that Apple’s lack of communication prompted him to go public with his findings — even though Apple says staying quiet about a bug until it is fixed is how researchers qualify for recognition in security advisories.
“I told them, ‘I’m willing to work with you if you can provide some details of when you plan on remediating this, and whether there would be any recognition or bug bounty payout’,” Rauch said, noting that he told Apple he planned to publish his findings within 90 days of notifying them. “Their response was basically, ‘We’d appreciate it if you didn’t leak this.’”
Rauch’s experience echoes that of other researchers interviewed in a recent Washington Post article about how not fun it can be to report security vulnerabilities to Apple, a notoriously secretive company. The common complaints were that Apple is slow to fix bugs and doesn’t always pay or publicly recognize hackers for their reports, and that researchers often receive little or no feedback from the company.
The risk, of course, is that some researchers may decide it’s less of a hassle to sell their exploits to vulnerability brokers, or on the darknet — both of which often pay far more than bug bounty awards.
There’s also a risk that frustrated researchers will simply post their findings online for everyone to see and exploit — regardless of whether the vendor has released a patch. Earlier this week, a security researcher who goes by the handle “illusionofchaos” released writeups on three zero-day vulnerabilities in Apple’s iOS mobile operating system — apparently out of frustration over trying to work with Apple’s bug bounty program.
Ars Technica reports that on July 19 Apple fixed a bug that illusionofchaos reported on April 29, but that Apple neglected to credit him in its security advisory.
“Frustration with this failure of Apple to live up to its own promises led illusionofchaos to first threaten, then publicly drop this week’s three zero-days,” wrote Jim Salter for Ars. “In illusionofchaos’ own words: ‘Ten days ago I asked for an explanation and warned then that I would make my research public if I don’t receive an explanation. My request was ignored so I’m doing what I said I would.’”
Rauch said he realizes the Airtag bug he found probably isn’t the most pressing security or privacy issue Apple is grappling with at the moment. But he said neither is it difficult to fix this particular flaw, which requires additional restrictions on data that Airtag users can enter into the Lost Mode’s phone number settings.
“It’s a pretty easy thing to fix,” he said. “Having said that, I imagine they probably want to also figure out how this was missed in the first place.”
Apple has not responded to requests for comment.
Some of the most successful and lucrative online scams employ a “low-and-slow” approach — avoiding detection or interference from researchers and law enforcement agencies by stealing small bits of cash from many people over an extended period. Here’s the story of a cybercrime group that compromises up to 100,000 email inboxes per day, and apparently does little else with this access except siphon gift card and customer loyalty program data that can be resold online.
The data in this story come from a trusted source in the security industry who has visibility into a network of hacked machines that fraudsters in just about every corner of the Internet are using to anonymize their malicious Web traffic. For the past three years, the source — we’ll call him “Bill” to preserve his requested anonymity — has been watching one group of threat actors that is mass-testing millions of usernames and passwords against the world’s major email providers each day.
Bill said he’s not sure where the passwords are coming from, but he assumes they are tied to various databases of compromised websites that get posted to password cracking and hacking forums on a regular basis. Bill said this criminal group averages between five and ten million email authentication attempts daily, and comes away with anywhere from 50,000 to 100,000 working inbox credentials.
In about half the cases the credentials are being checked via “IMAP,” which is an email standard used by email software clients like Mozilla’s Thunderbird and Microsoft Outlook. With his visibility into the proxy network, Bill can see whether or not an authentication attempt succeeds based on the network response from the email provider (e.g. mail server responds “OK” = successful access).
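The signal Bill is watching for is just the ordinary IMAP login handshake that clients like Thunderbird perform: the server's tagged OK/NO reply tells a legitimate client and a credential-stuffer alike whether the password worked. A minimal sketch of that check (illustrative only; use it solely against accounts you own):

```python
import imaplib

def imap_login_ok(host: str, user: str, password: str) -> bool:
    """Attempt one IMAP login over TLS; the server's tagged OK/NO
    response is what distinguishes a working credential from a dead one."""
    try:
        with imaplib.IMAP4_SSL(host) as conn:
            conn.login(user, password)  # raises imaplib.IMAP4.error on "NO"
            return True
    except (imaplib.IMAP4.error, OSError):
        # Bad credentials, or a host we cannot reach at all
        return False
```

Because the success/failure signal rides in the protocol itself, anyone who can observe the network response — as Bill can through the proxy network — can tally working credentials without ever seeing the inbox contents.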
You might think that whoever is behind such a sprawling crime machine would use their access to blast out spam, or conduct targeted phishing attacks against each victim’s contacts. But based on interactions that Bill has had with several large email providers so far, this crime gang merely uses custom, automated scripts that periodically log in and search each inbox for digital items of value that can easily be resold.
And they seem particularly focused on stealing gift card data.
“Sometimes they’ll log in as much as two to three times a week for months at a time,” Bill said. “These guys are looking for low-hanging fruit — basically cash in your inbox. Whether it’s related to hotel or airline rewards or just Amazon gift cards, after they successfully log in to the account their scripts start pilfering inboxes looking for things that could be of value.”
According to Bill, the fraudsters aren’t downloading all of their victims’ emails: That would quickly add up to a monstrous amount of data. Rather, they’re using automated systems to log in to each inbox and search for a variety of domains and other terms related to companies that maintain loyalty and points programs, and/or issue gift cards and handle their fulfillment.
Why go after hotel or airline rewards? Because these accounts can all be cleaned out and deposited onto a gift card number that can be resold quickly online for 80 percent of its value.
“These guys want that hard digital asset — the cash that is sitting there in your inbox,” Bill said. “You literally just pull cash out of peoples’ inboxes, and then you have all these secondary markets where you can sell this stuff.”
Bill’s data also shows that this gang is so aggressively going after gift card data that it will routinely seek new gift card benefits on behalf of victims, when that option is available. For example, many companies now offer employees a “wellness benefit” if they can demonstrate they’re keeping up with some kind of healthy new habit, such as daily gym visits, yoga, or quitting smoking.
Bill said these crooks have figured out a way to tap into those benefits as well.
“A number of health insurance companies have wellness programs to encourage employees to exercise more, where if you sign up and pledge to 30 push-ups a day for the next few months or something you’ll get five wellness points towards a $10 Starbucks gift card, which requires 1000 wellness points,” Bill explained. “They’re actually automating the process of replying saying you completed this activity so they can bump up your point balance and get your gift card.”
The Gift Card Gang’s Footprint
How do the compromised email credentials break down in terms of ISPs and email providers? There are victims on nearly all major email networks, but Bill said several large Internet service providers (ISPs) in Germany and France are heavily represented in the compromised email account data.
“With some of these international email providers we’re seeing something like 25,000 to 50,000 email accounts a day get hacked,” Bill said. “I don’t know why they’re getting popped so heavily.”
That may sound like a lot of hacked inboxes, but Bill said some of the bigger ISPs represented in his data have tens or hundreds of millions of customers.
Measuring which ISPs and email providers have the biggest numbers of compromised customers is not so simple in many cases, nor is identifying companies with employees whose email accounts have been hacked.
This kind of mapping is often more difficult than it used to be because so many organizations have now outsourced their email to cloud services like Gmail and Microsoft Office365 — where users can access their email, files and chat records all in one place.
“It’s a little complicated with Office 365 because it’s one thing to say okay how many Hotmail connections are you seeing per day in all this credential-stuffing activity, and you can see the testing against Hotmail’s site,” Bill said. “But with the IMAP traffic we’re looking at, the usernames being logged into are any of the million or so domains hosted on Office365, many of which will tell you very little about the victim organization itself.”
On top of that, it’s also difficult to know how much activity you’re not seeing.
Looking at the small set of Internet address blocks he knows are associated with Microsoft 365 email infrastructure, Bill examined the IMAP traffic flowing from this group to those blocks. Bill said that in the first week of April 2021, he identified 15,000 compromised Office365 accounts being accessed by this group, spread over 6,500 different organizations that use Office365.
“So I’m seeing this traffic to just like 10 net blocks tied to Microsoft, which means I’m only looking at maybe 25 percent of Microsoft’s infrastructure,” Bill explained. “And with our puny visibility into probably less than one percent of overall password stuffing traffic aimed at Microsoft, we’re seeing 600 Office accounts being breached a day. So if I’m only seeing one percent, that means we’re likely talking about tens of thousands of Office365 accounts compromised daily worldwide.”
In a December 2020 blog post about how Microsoft is moving away from passwords to more robust authentication approaches, the software giant said an average of one in every 250 corporate accounts is compromised each month. As of last year, Microsoft had nearly 240 million active users, according to this analysis.
“To me, this is an important story because for years people have been like, yeah we know email isn’t very secure, but this generic statement doesn’t have any teeth to it,” Bill said. “I don’t feel like anyone has been able to call attention to the numbers that show why email is so insecure.”
Bill says that in general companies have a great many more tools available for securing and analyzing employee email traffic when that access is funneled through a Web page or VPN, versus when that access happens via IMAP.
“It’s just more difficult to get through the Web interface because on a website you have a plethora of advanced authentication controls at your fingertips, including things like device fingerprinting, scanning for http header anomalies, and so on,” Bill said. “But what are the detection signatures you have available for detecting malicious logins via IMAP?”
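The instrumentation Bill is describing doesn't have to be exotic; even a per-account baseline of previously seen login origins would surface this gang's traffic. A toy sketch of that idea (the tuple log format and `flag_new_origins` name are assumptions for illustration):

```python
from collections import defaultdict

def flag_new_origins(logins):
    """Given an ordered stream of (account, origin) IMAP logins,
    flag any login from an origin that account has never used before."""
    seen = defaultdict(set)
    flagged = []
    for account, origin in logins:
        if seen[account] and origin not in seen[account]:
            flagged.append((account, origin))
        seen[account].add(origin)
    return flagged
```

A real deployment would key on ASN or geolocation rather than raw strings and would age out stale baselines, but even this level of logging is precisely what, per Bill, many IMAP servers skip by default.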
Microsoft declined to comment specifically on Bill’s research, but said customers can block the overwhelming majority of account takeover efforts by enabling multi-factor authentication.
“For context, our research indicates that multi-factor authentication prevents more than 99.9% of account compromises,” reads a statement from Microsoft. “Moreover, for enterprise customers, innovations like Security Defaults, which disables basic authentication and requires users to enroll a second factor, have already significantly decreased the proportion of compromised accounts. In addition, for consumer accounts, adding a second authentication factor is required on all accounts.”
A Mess That’s Likely to Stay That Way
Bill said he’s frustrated by having such visibility into this credential testing botnet while being unable to do much about it. He’s shared his data with some of the bigger ISPs in Europe, but says months later he’s still seeing those same inboxes being accessed by the gift card gang.
The problem, Bill says, is that many large ISPs lack any sort of baseline knowledge of or useful data about customers who access their email via IMAP. That is, they lack any sort of instrumentation to be able to tell the difference between legitimate and suspicious logins for their customers who read their messages using an email client.
“My guess is in a lot of cases the IMAP servers by default aren’t logging every search request, so [the ISP] can’t go back and see this happening,” Bill said.
Confounding the challenge, there isn’t much of an upside for ISPs interested in voluntarily monitoring their IMAP traffic for hacked accounts.
“Let’s say you’re an ISP that does have the instrumentation to find this activity and you’ve just identified 10,000 of your customers who are hacked. But you also know they are accessing their email exclusively through an email client. What do you do? You can’t flag their account for a password reset, because there’s no mechanism in the email client to effect a password change.”
Which means those 10,000 customers are then going to start receiving error messages whenever they try to access their email.
“Those customers are likely going to get super pissed off and call up the ISP mad as hell,” Bill said. “And that customer service person is then going to have to spend a bunch of time explaining how to use the webmail service. As a result, very few ISPs are going to do anything about this.”
Indicators of Compromise (IoCs)
It’s not often KrebsOnSecurity has occasion to publish so-called “indicators of compromise” (IoCs), but hopefully some ISPs may find the information here useful. This group automates the searching of inboxes for specific domains and trademarks associated with gift card activity and other accounts with stored electronic value, such as rewards points and mileage programs.
This file includes the top inbox search terms used in a single 24-hour period by the gift card gang. The numbers on the left in the spreadsheet represent how many times during that 24-hour period the gift card gang searched for that term in a compromised inbox.
Some of the search terms are focused on specific brands — such as Amazon gift cards or Hilton Honors points; others are for major gift card networks like CashStar, which issues cards that are white-labeled by dozens of brands like Target and Nordstrom. Inboxes hacked by this gang will likely be searched on many of these terms over the span of just a few days.
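Defenders can turn the gang's own search terms around and use them as a screen: a message mentioning several of these brands is exactly the kind of mail the gang hunts for. The sketch below is a minimal, hypothetical scanner; the term list is abbreviated from the brands named in this article, not the full IoC spreadsheet.

```python
# A few illustrative terms drawn from the brands mentioned in the article;
# the gang's real search list in the linked spreadsheet is much longer.
GIFT_CARD_TERMS = [
    "gift card", "giftcard", "amazon", "hilton honors",
    "cashstar", "rewards", "mileage",
]

def matching_terms(message_text: str, terms=GIFT_CARD_TERMS) -> list[str]:
    """Case-insensitive substring scan of one message for IoC terms."""
    lower = message_text.lower()
    return [t for t in terms if t in lower]

msg = "Your Amazon eGift Card order and CashStar receipt"
print(matching_terms(msg))  # ['gift card', 'amazon', 'cashstar']
```

An ISP that flags inboxes dense with such hits could prioritize them for monitoring, since those are the accounts the gang will keep coming back to.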
Over the past 15 years, a cybercrime anonymity service known as VIP72 has enabled countless fraudsters to mask their true location online by routing their traffic through millions of malware-infected systems. But roughly two weeks ago, VIP72’s online storefront — which ironically enough has remained at the same U.S.-based Internet address for more than a decade — simply vanished.
Like other anonymity networks marketed largely on cybercrime forums online, VIP72 routes its customers’ traffic through computers that have been hacked and seeded with malicious software. Using services like VIP72, customers can select network nodes in virtually any country, and relay their traffic while hiding behind some unwitting victim’s Internet address.
The domain Vip72[.]org was originally registered in 2006 to “Corpse,” the handle adopted by a Russian-speaking hacker who gained infamy several years prior for creating and selling an extremely sophisticated online banking trojan called A311 Death, a.k.a. “Haxdoor” and “Nuclear Grabber.” Haxdoor was way ahead of its time in many respects, and it was used in multiple million-dollar cyberheists long before multimillion-dollar cyberheists became daily front-page news.
Between 2003 and 2006, Corpse focused on selling and supporting his Haxdoor malware. Emerging in 2006, VIP72 was clearly one of his side hustles that turned into a reliable moneymaker for many years to come. And it stands to reason that VIP72 was launched with the help of systems already infected with Corpse’s trojan malware.
The first mention of VIP72 in the cybercrime underground came in 2006 when someone using the handle “Revive” advertised the service on Exploit, a Russian language hacking forum. Revive established a sales presence for VIP72 on multiple other forums, and the contact details and messages shared privately by that user with other forum members show Corpse and Revive are one and the same.
When asked in 2006 whether the software that powered VIP72 was based on his Corpse software, Revive replied that “it works on the new Corpse software, specially written for our service.”
One denizen of a Russian language crime forum who complained about the unexplained closure of VIP72 last month said they noticed a change in the site’s domain name infrastructure just prior to the service’s disappearance. But that claim could not be verified, as there simply are no signs that any of that infrastructure changed prior to VIP72’s demise.
In fact, until mid-August VIP72’s main home page and supporting infrastructure had remained at the same U.S.-based Internet address for more than a decade — a remarkable achievement for such a high-profile cybercrime service.
Cybercrime forums in multiple languages are littered with tutorials about how to use VIP72 to hide one’s location while engaging in financial fraud. From examining some of those tutorials, it is clear that VIP72 is quite popular among cybercriminals who engage in “credential stuffing” — taking lists of usernames and passwords stolen from one site and testing how many of those credentials work at other sites.
Corpse/Revive also long operated an extremely popular service called check2ip[.]com, which promised customers the ability to quickly tell whether a given Internet address is flagged by any security companies as malicious or spammy.
Hosted on the same Internet address as VIP72 for the past decade until mid-August 2021, Check2IP also advertised the ability to let customers detect “DNS leaks,” instances where configuration errors can expose the true Internet address of hidden cybercrime infrastructure and services online.
Check2IP is so popular that it has become a verbal shorthand for basic due diligence in certain cybercrime communities. Also, Check2IP has been incorporated into a variety of cybercrime services online — but especially those involved in mass-mailing malicious and phishing email messages.
It remains unclear what happened to VIP72; users report that the anonymity network is still functioning even though the service’s website has been gone for two weeks. That makes sense since the infected systems that get resold through VIP72 are still infected and will happily continue to forward traffic so long as they remain infected. Perhaps the domain was seized in a law enforcement operation.
But it could be that the service simply decided to stop accepting new customers because it had trouble competing with an influx of newer, more sophisticated criminal proxy services, as well as with the rise of “bulletproof” residential proxy networks. For most of its existence until recently, VIP72 normally had several hundred thousand compromised systems available for rent. By the time its website vanished last month, that number had dwindled to fewer than 25,000 systems globally.
On Thursday evening, KrebsOnSecurity was the subject of a rather massive (and mercifully brief) distributed denial-of-service (DDoS) attack. The assault came from “Meris,” the same new botnet behind record-shattering attacks against Russian search giant Yandex this week and internet infrastructure firm Cloudflare earlier this summer.
Cloudflare recently wrote about its attack, which clocked in at 17.2 million bogus requests-per-second. To put that in perspective, Cloudflare serves over 25 million HTTP requests per second on average.
In its Aug. 19 writeup, Cloudflare neglected to assign a name to the botnet behind the attack. But on Thursday DDoS protection firm Qrator Labs identified the culprit — “Meris” — a new monster that first emerged at the end of June 2021.
Qrator says Meris has launched even bigger attacks since: A titanic and ongoing DDoS that hit Russian Internet search giant Yandex last week is estimated to have been launched by roughly 250,000 malware-infected devices globally, sending 21.8 million bogus requests-per-second.
While last night’s Meris attack on this site was far smaller than the recent Cloudflare DDoS, it was far larger than the Mirai DDoS attack in 2016 that held KrebsOnSecurity offline for nearly four days. The traffic deluge from Thursday’s attack on this site was more than four times what Mirai threw at this site five years ago. This latest attack involved more than two million requests-per-second. By comparison, the 2016 Mirai DDoS generated approximately 450,000 requests-per-second.
According to Qrator, which is working with Yandex on combating the attack, Meris appears to be made up of Internet routers produced by MikroTik. Qrator says the United States is home to the greatest number of MikroTik routers that are potentially vulnerable to compromise by Meris — with more than 42 percent of the world’s MikroTik systems connected to the Internet (followed by China at 18.9 percent, and a long tail of one- and two-percent countries).
It’s not immediately clear which security vulnerabilities led to these estimated 250,000 MikroTik routers getting hacked by Meris.
“The spectrum of RouterOS versions we see across this botnet varies from years old to recent,” the company wrote. “The largest share belongs to the version of firmware previous to the current stable one.”
It’s fitting that Meris would rear its head on the five-year anniversary of the emergence of Mirai, an Internet of Things (IoT) botnet strain that was engineered to out-compete all other IoT botnet strains at the time. Mirai was extremely successful at crowding out this competition, and quickly grew to infect tens of thousands of IoT devices made by dozens of manufacturers.
And then its co-authors decided to leak the Mirai source code, which led to the proliferation of dozens of Mirai variants, many of which continue to operate today.
The biggest contributor to the IoT botnet problem — a plethora of companies white-labeling IoT devices that were never designed with security in mind and are often shipped to the customer in default-insecure states — hasn’t changed much, mainly because these devices tend to be far cheaper than more secure alternatives.
The good news is that over the past five years, large Internet infrastructure companies like Akamai, Cloudflare and Google (which protects this site with its Project Shield initiative) have heavily invested in ramping up their ability to withstand these outsized attacks [full disclosure: Akamai is an advertiser on this site].
More importantly, the Internet community at large has gotten better at putting their heads together to fight DDoS attacks, by disrupting the infrastructure abused by these enormous IoT botnets, said Richard Clayton, director of Cambridge University’s Cybercrime Centre.
“It would be fair to say we’re currently concerned about a couple of botnets which are larger than we have seen for some time,” Clayton said. “But equally, you never know they may peter out. There are a lot of people who spend their time trying to make sure these things are hard to keep stable. So there are people out there defending us all.”