Threat Lab Archives - Integral Ad Science https://integralads.com/insider/category/topics/threat-lab/

Inside “Arcade,” the Gaming Web That Plays Itself https://integralads.com/insider/inside-arcade-the-gaming-web-that-plays-itself/ Wed, 29 Oct 2025 12:00:00 +0000 https://integralads.com/?p=344125 The IAS Threat Lab's latest discovery, Arcade, reveals a growing monetization pattern within the open web’s gaming ecosystem. Read more here.

The post Inside “Arcade,” the Gaming Web That Plays Itself appeared first on Integral Ad Science.


Executive Summary

The IAS Threat Lab is a dedicated team within Integral Ad Science (IAS) focused on uncovering, dissecting, and mitigating sophisticated ad-fraud schemes and malicious digital-advertising behavior. The team’s latest discovery, Arcade, reveals a growing monetization pattern within the open web’s gaming ecosystem. A large cluster of HTML5 gaming domains, all active and fully functional, is monetizing its ad supply through hidden in-app browser activity sourced from fraudulent Android applications. These gaming domains receive real ad requests and deliver playable content, but the traffic itself never comes from visible users. Instead, it originates from background-rendered browser tabs embedded inside Android utilities and lightweight gaming apps.

The IAS Threat Lab identified 50 Android apps with a combined 10 million installs, collectively driving traffic to a network of more than 200 HTML5 gaming domains. The HTML5 games are legitimate and responsive, yet typically unseen by human users. IAS’s fraud detection systems identified unusual background rendering and domain iteration behavior, enabling early intervention before the campaign reached full scale.

An Invisible Arcade of Real Games

The domains at the heart of this scheme are deceptively authentic. Each hosts playable, browser-based games, complete with interactive interfaces and advertising frameworks. The domain names frequently contain gaming-related keywords such as “game” or “play” and appear innocuous to verification systems because the pages load correctly and function as intended. When traffic is reviewed on the surface, everything points to legitimate gaming engagement.

However, behind this facade, fraudulent Android apps continuously load these pages in invisible in-app browser tabs, generating a constant stream of ad impressions. The pages are not user-facing, and the activity occurs silently while the app performs unrelated tasks. One analyzed app cycled through hundreds of domains in sequence, creating persistent, automated inventory that appeared indistinguishable from genuine gameplay sessions.
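Behavior like that leaves a measurable fingerprint. As a toy illustration only — not IAS’s production detection logic — a session that loads an unusually large number of distinct domains at near-constant intervals can be flagged with a simple heuristic:

```python
from statistics import pstdev, mean

def flag_domain_cycling(events, min_domains=25, max_jitter=0.15):
    """Flag a session whose page loads cycle through many distinct
    domains at suspiciously regular intervals.

    events: list of (timestamp_seconds, domain) tuples, ordered by time.
    Returns True when the session looks automated.
    """
    if len(events) < 2:
        return False
    domains = {d for _, d in events}
    if len(domains) < min_domains:
        return False
    gaps = [b[0] - a[0] for a, b in zip(events, events[1:])]
    m = mean(gaps)
    if m == 0:
        return True  # instantaneous back-to-back loads are never human
    # Coefficient of variation: humans browse erratically, scripts do not.
    return pstdev(gaps) / m < max_jitter

# A scripted session: 30 distinct domains, one load every 5 seconds exactly.
bot = [(i * 5.0, f"playgame{i}.example") for i in range(30)]
print(flag_domain_cycling(bot))  # True
```

A real detector would combine many more signals, but domain-iteration regularity alone already separates this traffic from organic browsing.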


How the Scheme Activates

Arcade’s activation framework builds on the same cloaking principles first disclosed by IAS in Mirage. In that earlier operation, apps concealed their ad fraud logic until certain install conditions were met, allowing them to appear legitimate in standard testing environments. Arcade applies this same approach.

When installed directly from app stores, Arcade-linked apps behave normally and show no signs of suspicious behavior. The ad fraud components only activate when the app identifies that it has been installed through a paid ad campaign or referral flow.

This determination is made using an attribution SDK (AppsFlyer), which reports the method of installation to the app. If the install is confirmed to be campaign-driven, the app communicates with a remote command-and-control server, transmitting device and referral data in custom headers. When these headers meet strict validation checks, the server responds with an encrypted payload, which the app decrypts and loads dynamically. The decrypted code enables hidden in-app browser tabs to render HTML5 gaming domains in the background and, in many cases, activates out-of-context ad delivery as a secondary monetization path.
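In outline, the activation decision resembles the following sketch. Every name here — the functions, header fields, and endpoint — is hypothetical, reconstructed from the behavior described above rather than recovered code:

```python
import json

def should_activate(install_info):
    """Gate activation on attribution data, as the report describes:
    organic installs (e.g. a reviewer downloading straight from the store)
    see a clean app; only campaign-driven installs proceed."""
    return install_info.get("source") in {"paid_campaign", "referral"}

def build_c2_request(install_info, device):
    """Hypothetical shape of the check-in: device and referral data packed
    into custom headers for the command-and-control server to validate."""
    return {
        "url": "https://c2.invalid/check-in",  # placeholder endpoint
        "headers": {
            "X-Install-Referrer": install_info.get("referrer", ""),
            "X-Device-Profile": json.dumps(device),
        },
    }

organic = {"source": "organic"}
campaign = {"source": "paid_campaign", "referrer": "utm_source=ads"}

print(should_activate(organic))   # False — the app stays dormant
print(should_activate(campaign))  # True — the app requests the payload
```

The key design point is that the malicious branch is unreachable without attribution data only real campaign installs carry, which is exactly what keeps the apps clean in standard testing environments.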

Because this payload is delivered only to targeted devices, the apps remain clean under normal review conditions. Many samples also include anti-analysis safeguards designed to detect virtualized or sandboxed environments and suspend execution, further complicating detection efforts.

Monetization at Two Levels

Once activated, apps under the Arcade cluster monetize through two distinct yet complementary mechanisms. The first is hidden traffic generation, which uses the network of gaming domains as a monetization endpoint. These domains serve as the true beneficiaries, selling inventory created by invisible sessions within the apps. The second is out-of-context advertising, a recurring behavior seen in earlier IAS investigations such as Vapor and Mirage, where apps run unexpected full-screen or interstitial ads appearing outside normal engagement flows.

This dual structure allows threat actors to extract revenue from both visible and invisible ad surfaces. While the visible ads frustrate users, the invisible gaming sessions serve as the far greater financial engine.

Distribution and Geographic Shifts

Arcade’s early activity was concentrated in Western markets, primarily the United States, Brazil, and Canada. Over time, the campaigns have migrated toward Asia-Pacific regions. By September 2025, installs and traffic were dominated by Turkey, Vietnam, the Philippines, Thailand, Indonesia, and Malaysia. These countries now comprise nearly half of all detected Arcade traffic, indicating a deliberate redirection of campaign targeting.

Among the identified apps, Street King Vacano (com.txt.streetking.vacano), a lightweight gaming app, exemplifies Arcade’s scaling model. The app achieved top chart positions, including #1 Top New Free, in several markets and reached over 1 million installs in less than one month.

Detection and Outlook

Arcade is anything but subtle. The volume of traffic attributed to this operation points to a well-resourced, coordinated effort capable of producing and maintaining Android apps and gaming domains at industrial scale. The threat actors behind Arcade have invested heavily in development infrastructure, domain acquisition, and continuous app deployment, allowing them to sustain large volumes of fraudulent traffic that blend into the broader gaming ecosystem.

IAS fraud detection systems identified the operation through a combination of behavioral anomaly analysis and domain traversal pattern recognition. Repeated, rapid page loads and non-interactive rendering events exposed the activity as non-human.
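As a simplified illustration of those two signals (not IAS’s actual models), rapid repeated page loads with no interleaved user input can be scored like this:

```python
def looks_non_human(page_loads, interactions, window_s=60.0):
    """Score a session as non-human when page loads repeat rapidly and no
    interaction event (tap, scroll, key) ever lands between them.

    page_loads:   sorted list of load timestamps (seconds)
    interactions: sorted list of interaction timestamps (seconds)
    """
    if len(page_loads) < 5:
        return False  # too little evidence either way
    span = page_loads[-1] - page_loads[0]
    rate = len(page_loads) / max(span, 1e-9)
    rapid = rate > 5 / window_s  # more than 5 loads per minute, sustained
    return rapid and len(interactions) == 0

# 20 page loads in 95 seconds without a single touch event: flagged.
print(looks_non_human([i * 5.0 for i in range(20)], []))  # True
```

Real humans occasionally binge pages quickly too, which is why a production system would weigh many such features together rather than rely on one threshold.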

The ongoing investigation continues to map the infrastructure of developer accounts and associated domain operators behind the scheme. While the current set includes 200 gaming domains and 50 apps, the modular nature of Arcade’s framework suggests that new domains can easily be added as others are blocked.

Conclusion

Arcade demonstrates how legitimate web content can be reappropriated into a hidden monetization layer. The gaming domains themselves are central to this operation, earning revenue from traffic that no human ever generates. Android apps function as the traffic engine, quietly delivering ad requests that fund the ecosystem behind them.

By detecting and removing these apps and domains from the advertising supply chain, IAS effectively cuts off the operation’s financial lifeline, preventing further monetization and neutralizing the resources that fuel its continued growth.

IAS Threat Lab remains committed to uncovering and disrupting evolving monetization schemes before they reach advertisers’ budgets. Learn more about our AI-powered approach to combating fraud pre- and post-bid.

IAS Threat Lab Uncovers Sophisticated Fraud Scheme Targeting Android Devices https://integralads.com/insider/threat-lab-uncovers-sophisticated-fraud-scheme-targeting-android-devices/ Wed, 23 Jul 2025 12:00:00 +0000 https://integralads.com/?p=343317 What happens when utility apps turn into full-screen ad machines? The IAS Threat Lab has uncovered a global ad fraud scheme — codenamed “Mirage” — designed to mislead users, inflate installs, and exploit the mobile advertising ecosystem.

The post IAS Threat Lab Uncovers Sophisticated Fraud Scheme Targeting Android Devices appeared first on Integral Ad Science.


What happens when utility apps turn into full-screen ad machines? The IAS Threat Lab has uncovered a global ad fraud scheme — codenamed “Mirage” — designed to mislead users, inflate installs, and exploit the mobile advertising ecosystem. 

The Scheme

The IAS Threat Lab has uncovered a large-scale, fast-evolving ad fraud operation known as Mirage — a sprawling network of fraudulent Android apps designed to hijack user devices and exploit ad ecosystems at massive scale.

Mirage apps masquerade as helpful utilities like phone cleaners and battery boosters. On the surface, they appear harmless. But behind the scenes, they use cloaking techniques and bot-driven installs to quietly switch on aggressive ad fraud behavior once installed through specific referral links. The result: full-screen interstitial ads that interrupt users out of context, with no real utility provided by the apps themselves.

Threat Lab researchers have identified over 300 Mirage-linked app IDs, collectively amassing more than 70 million downloads and generating over 350 million daily bid requests. These apps were built with one goal in mind: monetize unsuspecting users through persistent, non-consensual advertising.

Mirage marks a troubling evolution of tactics first seen in the Threat Lab-discovered Vapor scheme. Unlike Vapor apps, which gradually stripped away features over time, Mirage apps are designed to mislead from the outset — launching with fake installs to climb app store rankings and turning on monetization only when real users start downloading.

The Takedown

IAS collaborated directly with Google to take swift action against the Mirage operation. Based on Threat Lab intelligence, all identified Mirage apps have been removed from the Google Play Store. Google Play Protect is actively alerting users and will disable these apps automatically — even when installed outside of the Play Store ecosystem.

IAS continues to monitor Mirage’s activity, as its operators rapidly adapt through reskinned apps, recycled developer accounts, and global distribution across North America, Europe, and Asia-Pacific.

Want to understand Mirage’s full scale and how IAS unraveled the scheme? The full report includes:

  • A breakdown of Mirage’s app behavior and install patterns
  • Tactics used to cloak fraud from store reviewers
  • Case study of a Mirage app that reached #1 in the U.S.
  • Timeline of IAS and Google’s coordinated takedown

How IAS is Fighting Back Against the Shape-Shifting Kaleidoscope Scheme https://integralads.com/insider/ias-threat-lab-fraud-scheme-kaleidoscope/ Fri, 09 May 2025 12:00:00 +0000 https://integralads.com/?p=341759 Ad fraud is evolving — and so are we. The IAS Threat Lab has uncovered a sophisticated new threat dubbed Kaleidoscope — a deceptive Android ad fraud operation that’s as dynamic as it is dangerous. This scheme hides behind seemingly...

The post How IAS is Fighting Back Against the Shape-Shifting Kaleidoscope Scheme appeared first on Integral Ad Science.


Ad fraud is evolving — and so are we.

The IAS Threat Lab has uncovered a sophisticated new threat dubbed Kaleidoscope — a deceptive Android ad fraud operation that’s as dynamic as it is dangerous. This scheme hides behind seemingly legitimate apps available on Google Play, while malicious lookalike versions are quietly distributed through third-party app stores.

What makes Kaleidoscope so dangerous?

Like its namesake, Kaleidoscope is constantly shifting, transforming its structure to evade detection and prolong its fraudulent activity. The scheme’s complexity lies in:

  • App cloning with a twist: Two versions of the same app — one clean, one malicious — sharing a single app ID. The clean version gets distributed via official app stores, while the malicious twin hides in third-party app stores, flooding the ecosystem with fake impressions.
  • Rebranded SDKs: Following exposure of the CaramelAds SDK in earlier schemes like Konfety, fraudsters have pivoted — stripping out identifiers and repackaging malicious code in new, harder-to-detect SDKs.
  • Concealed infrastructure: A web of new domains powers communication between infected devices and command-and-control servers, allowing bad actors to coordinate large-scale fraud in real time.
  • Continued expansion: IAS has uncovered over 130 app IDs, including 40 newly uncovered apps, associated with Kaleidoscope, driving an estimated 2.5 million fraudulent installs per month.
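The app-cloning tactic has a practical implication for verification: two packages can claim the same app ID while shipping different binaries. A rough, hypothetical check (illustrative only, not Kaleidoscope-specific tooling) is to compare content digests of the official store build against a sideloaded build:

```python
import hashlib

def apk_digest(apk_bytes: bytes) -> str:
    """Content digest of a package; two builds that share an app ID but
    differ here are, at minimum, not the same binary."""
    return hashlib.sha256(apk_bytes).hexdigest()

def same_binary(official: bytes, sideloaded: bytes) -> bool:
    return apk_digest(official) == apk_digest(sideloaded)

# Placeholder byte strings standing in for real APK contents.
clean = b"PK...clean build of com.example.game..."
malicious = b"PK...repacked build, same app ID, extra SDK..."

print(same_binary(clean, clean))      # True
print(same_binary(clean, malicious))  # False — a cloned twin
```

In practice, analysts compare signing certificates and embedded SDK lists rather than whole-file hashes, but the principle — same app ID, different contents — is the same.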

A new chapter in mobile ad fraud

Kaleidoscope is not an isolated incident — it’s a blueprint for how bad actors are adapting in the wake of increased security measures. IAS’s Threat Lab has conducted deep forensic analysis of both previously known and newly uncovered apps to trace the evolution of this fraud model.

It’s a dangerous shift from simple out-of-context ad abuse to something far more dynamic — and scalable.

IAS is staying ahead of the fraud

IAS customers are already protected. Our fraud pre-bid avoidance solution, available within leading DSPs, leverages real-time machine learning models trained to identify and avoid threats like Kaleidoscope before a single bid is placed.

IAS blocks impressions tied to these malicious app IDs and domains at the source — so your ad dollars don’t fund fraud.

Download the full report to learn more.

IAS Threat Lab Uncovers Extensive Fraud Scheme Leveraging Fake Android Apps https://integralads.com/insider/ias-threat-lab-fraud-scheme-fake-android-apps/ Wed, 05 Mar 2025 12:58:12 +0000 https://integralads.com/?p=340047 The IAS Threat Lab has uncovered an extensive and sophisticated ad fraud scheme, codenamed Vapor, that leverages fake Android apps to deploy endless, intrusive full-screen interstitial video ads.

The post IAS Threat Lab Uncovers Extensive Fraud Scheme Leveraging Fake Android Apps appeared first on Integral Ad Science.


The Scheme

The IAS Threat Lab has uncovered an extensive and sophisticated ad fraud scheme, codenamed Vapor, that leverages fake Android apps to deploy endless, intrusive full-screen interstitial video ads. Vapor exploits unsuspecting users and ad networks on a massive scale, representing a highly organized and pervasive ad fraud scheme.

Threat Lab has identified over 180 app IDs since early 2024 as part of the Vapor scheme, collectively amassing over 56 million downloads and generating over 200 million bid requests daily, with no real functionality delivered to users.

The Takedown

The IAS Threat Lab has actively worked to disrupt this fraudulent operation, collaborating with industry partners to minimize its impact. As a result of our findings, Google has removed all identified apps from the Play Store. Google Play Protect will warn users and automatically disable these apps, even when they originate from sources outside of Google Play. 

We continue to monitor the Vapor operation as threat actors adapt their tactics and as new apps are added to the scheme.

Download the full report to access comprehensive insights on the Vapor scheme, including app design and timeline of events.

 IAS partners are safeguarded against the impact of the Vapor threat through our fraud pre-bid avoidance solution available within their DSPs. Our advanced machine learning models power our fraud segments to ensure DSPs do not bid on impressions that originate from these apps. Explore our ad fraud solutions to learn more.

IAS’s Commitment to Privacy in Google’s Privacy Sandbox https://integralads.com/insider/commitment-privacy-google-privacy-sandbox/ Thu, 21 Nov 2024 14:00:00 +0000 https://integralads.com/?p=338525 For the past two years, IAS has been working closely with Google and the IAB Tech Lab on the development and testing of Google’s Privacy Sandbox. Much progress has been made to ensure that critical programmatic advertising use cases —...

The post IAS’s Commitment to Privacy in Google’s Privacy Sandbox appeared first on Integral Ad Science.


For the past two years, IAS has been working closely with Google and the IAB Tech Lab on the development and testing of Google’s Privacy Sandbox. Much progress has been made to ensure that critical programmatic advertising use cases — particularly third-party measurement and optimization — continue to be supported in a Privacy Sandbox environment. However, outstanding items remain, and continued collaboration between Google and the broader ad tech community is needed.

What is Privacy Sandbox?

Privacy Sandbox is an umbrella term for a set of proposed privacy-related updates to Google’s Chrome browser and Android operating system. According to Google:

The Privacy Sandbox initiative aims to create technologies that both protect people’s privacy online and give companies and developers tools to build thriving digital businesses.

The Privacy Sandbox has two core aims:

  • Provide alternative solutions for browsing without third-party cookies.
  • Reduce cross-site and cross-app tracking while helping to keep online content and services free for all.

These updates fundamentally change how ads are targeted, purchased, served, and measured for Chrome users.

IAS’s Contributions to Privacy Sandbox

IAS is an active member of the IAB Tech Lab’s Privacy Sandbox Task Force, focusing on third-party measurement and optimization use cases. Our analyses were captured in the group’s Privacy Sandbox assessment published in February 2024. 

In addition to our contributions to the IAB Tech Lab’s Task Force, we are partnering directly with Google’s Privacy Sandbox team to discuss third-party measurement requirements and brainstorm solutions to ensure measurement partners have continued access to the critical signals needed to detect fraud and brand safety risks and protect buyers.

IAS’s Privacy Sandbox Testing & Findings

We started testing by setting up a local development environment where we could test and understand the flow of the Protected Audience API end-to-end. This helped us understand how our tags are rendered, the limitations of Fenced Frames, and how we might need to receive data (like the top-level URL) from partners in the future.

  • The Protected Audience API enables on-device auctions, run by the browser, that choose relevant ads based on websites the user has previously visited.
  • A Fenced Frame is a privacy-enhanced version of the traditional iframe.

Fully understanding the flow of data inside the Protected Audience auction has helped us advise our DSP partners on the changes they might need to make so that IAS can continue to provide fraud and brand safety protection both pre-bid and post-bid.

A challenge we identified while setting up testing with our DSP partners is ensuring that we continue to receive creative macros.

Today, we use data passed to us by DSPs through creative macros at ad render time to enrich our understanding of the ad serving environment and the parties involved in the ad auction. These signals are critical components that inform and protect our customers, who have come to rely on this context.

Under the current Privacy Sandbox design, it is no longer the DSP that provides these macros at creative render time; instead, the SSP must replace them in the winning ad’s “renderUrl,” the endpoint used to render the ad creative.

Coordination is now needed between DSPs and SSPs to ensure the SSP provides these macros in the renderUrl they provide to DSPs. We believe a macro standard is needed to ensure this data is consistently exchanged between SSP and DSP. We are working with Google and the IAB Tech Lab to develop this macro standard.
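To make that hand-off concrete, here is a hypothetical sketch of an SSP substituting macros into a winning ad’s renderUrl. The macro names and URL are invented for illustration; the eventual standard may define different ones:

```python
from urllib.parse import quote

def expand_render_url(render_url: str, macros: dict) -> str:
    """Replace ${MACRO}-style placeholders with URL-encoded values,
    as an SSP would before the renderUrl is used to load the creative."""
    for name, value in macros.items():
        render_url = render_url.replace("${%s}" % name, quote(value, safe=""))
    return render_url

# Hypothetical renderUrl a DSP might register, carrying measurement macros.
url = "https://cdn.example/creative?page=${TOP_LEVEL_URL}&seat=${SEAT_ID}"
print(expand_render_url(url, {
    "TOP_LEVEL_URL": "https://news.example/article",
    "SEAT_ID": "seat-42",
}))
```

The standardization question is precisely which placeholder names SSPs must support, so that measurement vendors receive consistent signals regardless of which exchange won the auction.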

IAS’s Commitment to Privacy

IAS is a strong advocate of consumer privacy. Our platform is cookie-less, and we have strict data retention and access policies for personal data such as IP addresses. We continue to monitor privacy-driven changes and will adapt accordingly.

Supporting Privacy Sandbox is a strategic imperative for IAS. We have been pleased with the collaboration and partnership we have with the Privacy Sandbox team. We continue to work together to ensure third-party measurement and optimization use cases are supported in a Privacy Sandbox environment.

The Threat Lab Analyzes the Year’s Highest Ad Fraud Spikes https://integralads.com/insider/threat-lab-analyzes-ad-fraud/ Wed, 28 Feb 2024 12:00:00 +0000 https://integralads.com/?p=329489 IAS's Threat Lab explores the reasons behind the highest ad fraud spikes in the year.

The post The Threat Lab Analyzes the Year’s Highest Ad Fraud Spikes appeared first on Integral Ad Science.


In the Threat Lab’s recent Fraud Trends Report, the team advised advertisers and publishers to be wary of ad fraud when it’s at its peak. In the Q4 2023 analysis, the Threat Lab defined October, November, and December as the holiday season — and predicted that these months would be particularly dangerous from an ad fraud perspective. The team further anticipated that November, specifically its fourth week, would be the hardest hit by ad fraud.

Now that we can look at global data from the 2023 holiday season, the Threat Lab team wanted to analyze the accuracy of advising caution during this time and understand what this can teach marketers about protecting ad campaigns during particularly turbulent periods.

When is fraud at its highest?

The graph below maps out global ad fraud rates for each day in 2023. The period highlighted in dark green represents the dates between October 12 and December 4, which overlaps with our definition of the holiday season. With a median ad fraud rate 67% greater than the rest of the year, the holiday season stands out as a period of disproportionately high ad fraud activity — so much so that 2023’s highest ad fraud rates all fall within the highlighted period.
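The 67% figure is a direct comparison of medians. Illustratively, with made-up daily rates rather than IAS data:

```python
from statistics import median

def median_uplift(holiday_rates, baseline_rates):
    """Percent by which the holiday-period median fraud rate exceeds
    the rest-of-year median."""
    h, b = median(holiday_rates), median(baseline_rates)
    return (h - b) / b * 100

# Illustrative daily fraud rates (percent of impressions), not real data.
rest_of_year = [1.0, 1.1, 0.9, 1.2, 1.0, 0.8, 1.0]
holiday = [1.6, 1.7, 1.8, 1.5, 1.7]

print(round(median_uplift(holiday, rest_of_year)))  # 70 on these toy numbers
```

Medians are used rather than means so that a handful of extreme spike days can't dominate the comparison.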


What’s the deal with Black Friday?

However, certain dates pop more than others. The graph below shows that ad fraud peaks on November 24, 25, and 26 — 2023’s Black Friday weekend. The median ad fraud rate across these three days was 36% greater than on other dates in the holiday season, and a whopping 122% greater than during the rest of the year. The top three most hostile days of 2023 were the Saturday after Black Friday, the Sunday after Black Friday, and Black Friday itself.


What makes this trend even more striking is that November 27, the Monday after Black Friday, saw a 28% decrease in ad fraud relative to the day prior — the largest single-day drop of the entire year. This sharp reversal suggests that the disproportionately high holiday ad fraud peaking on this specific weekend isn’t accidental.
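Day-over-day percent change is the measure behind that observation. With toy numbers shaped like the weekend described (not IAS data):

```python
def daily_pct_change(rates):
    """Percent change of each day's fraud rate versus the prior day."""
    return [(b - a) / a * 100 for a, b in zip(rates, rates[1:])]

# Toy fraud rates around a Black Friday-style weekend.
rates = [1.2, 1.8, 2.0, 2.1, 1.5]  # Thu, Fri, Sat, Sun, Mon
changes = daily_pct_change(rates)
print(min(changes))  # the Monday collapse is the sharpest move in the series
```

A drop of this size immediately after the weekend is what suggests a deliberate, scheduled operation rather than random noise.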

How does fraudulent traffic compare to legitimate traffic?

To understand why the holiday season is so hostile, we compared fraudulent traffic with legitimate traffic.

Although legitimate traffic does increase gradually throughout the year with periodic dips, there aren’t major inconsistencies during the holiday season. Fraudulent traffic, on the other hand, increases dramatically. This comparison suggests that the spike in ad fraud during the holiday season isn’t due to a change in legitimate traffic behavior — it’s linked to fraudulent traffic becoming more active.

So why does this happen? It’s most likely that advertisers aggressively increase their ad spend in the weeks leading up to Black Friday. And once Black Friday ends, they use the remainder of their budget on the following weekend. The challenge is that the supply of high quality inventory is finite. The increased demand pressures advertisers to buy low quality inventory, exposing their campaigns to a higher risk of ad fraud.

How IAS Can Help

When high quality inventory is limited, the risk of fraud infiltrating campaigns rises sharply. Advertisers can mitigate this risk by adopting ad fraud mitigation and protection, which keeps their ad dollars away from the low quality inventory where fraud is most likely.

IAS ensures the most precise fraud detection possible with our three-pillar approach. Our fraud technology is based on a set of methodologies that detect the evolving threat of ad fraud with incredible accuracy. Our approach includes:

  • Behavioral and network analysis: our 10+ billion daily impressions provide a macro view of bot activity
  • Browser and device analysis: real-time signals captured at the ad call
  • Targeted reconnaissance: malware analysis, software disassembly, and the infiltration of hacker communities guide the detection and identification of emerging threats

It’s never too late — or too early — to protect your campaigns from fraud. Be prepared for fraudsters whose activity changes with the seasons. Contact an IAS representative today to find out how to fight fraud and drive superior results.

Fraud in Generative AI: A deep dive into how Gen AI affects marketers https://integralads.com/insider/fraud-generative-ai-marketers/ Thu, 25 Jan 2024 13:00:00 +0000 https://integralads.com/?p=328930 The IAS Threat Lab evaluated the effects of generative AI on ad fraud, how it could speed up the prominence of ad fraud in the digital advertising industry, and how we continue to protect marketers from these emerging threats.

The post Fraud in Generative AI: A deep dive into how Gen AI affects marketers appeared first on Integral Ad Science.


The IAS Threat Lab evaluates the impact of generative AI in digital advertising

Generative AI has been commanding conversations in digital media lately. While generative AI isn’t a new concept, it has recently caught fire due to the prevalence and effectiveness of chatbot AI platforms like ChatGPT. The rapid rise of this technology has moved several tech giants to create their own AI chatbots — and has inspired malware authors and fraudsters to do the same.

The IAS Threat Lab evaluated the effects of generative AI on ad fraud, how it could speed up the prominence of ad fraud in the digital advertising industry, and how we continue to protect marketers from these emerging threats.

What is generative AI?

Generative AI is artificial intelligence that’s able to create text, images, audio, video, or other media. This technology works by learning the patterns of information or data that it ingests, and generating new, similar information.

Let’s take a look at some ways marketers could be fooled by fraud powered by generative AI.

Fake websites and falsified user agent data

Generative AI can create realistic-looking websites filled with fake content, including articles, reviews, product listings, and more. It’s also possible to have AI ingest legitimate content from outside sources and have it launder the content into seemingly original articles and news stories. These sites can then be used to host fraudulent ads and generate fake impressions. 

It doesn’t stop there. Fraudsters can leverage generative AI to create websites that closely mimic legitimate publishers, fooling marketers into thinking they’re placing their ads on reputable platforms. While not strictly fraudulent, this tactic also powers Made-For-Advertising (MFA) sites, where seemingly legitimate content is crowded out by advertisements that overrun the majority of viewable space.

MFA sites can be created at scale by an individual with the help of generative AI. Due to the positive viewability and brand safety metrics of placing ads on these domains, these sites can fool marketers into believing they’re generating quality impressions — but in reality, these ad placements are low quality and a waste of ad spend. In fact, the Association of National Advertisers (ANA) reports that MFA websites represent a shocking 21% of impressions.

Generative AI can also falsify impressions by creating fake user agent strings, making it appear as if impressions are coming from legitimate devices and browsers. Fraudsters use AI models trained on vast datasets of real user agent data to generate plausible but entirely fake strings. These strings are then inserted into requests made by automated bots or scripts, enabling fraud at scale that is designed to bypass behavioral fraud detection.
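On the detection side, one illustrative counter (a toy rule set, not a production model) is consistency checking: generated user agents look plausible token-by-token but often combine tokens that never co-occur in genuine traffic:

```python
def ua_inconsistencies(ua: str) -> list:
    """Flag token combinations that don't co-occur in genuine browsers.
    A toy rule set for illustration; real detection uses learned models."""
    issues = []
    if "iPhone" in ua and "Android" in ua:
        issues.append("two operating systems in one string")
    if "Firefox/" in ua and "Chrome/" in ua:
        issues.append("two browser engines claimed at once")
    if "Windows NT 5.1" in ua and "Chrome/120" in ua:
        issues.append("modern Chrome never shipped for Windows XP")
    return issues

fake = ("Mozilla/5.0 (iPhone; Android 13) AppleWebKit/537.36 "
        "Chrome/120.0 Firefox/115.0")
print(ua_inconsistencies(fake))  # two contradictions flagged
```

Production systems learn these co-occurrence constraints from billions of genuine impressions, which is far harder for a generator to imitate than any individual token.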

Fraudulent user profiles and testimonials

Generative AI can create highly detailed user profiles. These fake profiles are often complete with demographic information, interests, and online behaviors with the intention of mimicking genuine user interactivity. Fake profiles often come hand-in-hand with AI-driven bots that can simulate user behavior, like mouse movements and ad interactions, to make fraudulent activity look natural. 

Along with fake profiles, generative AI can be used to produce large volumes of positive reviews and testimonials for products or services, artificially boosting popularity and trustworthiness. This type of activity appears consistently on major retail domains, video streaming platforms, and financial coverage websites. These fakes tend to be relatively easy to spot: their comments and reviews contain suspiciously detailed information and carry outlandish numbers of bot-generated “likes.”

How can you prevent AI-based ad fraud?

IAS’s ad fraud detection tools can help marketers mitigate the impacts of AI-based fraud. With advanced analytical, behavioral, and deterministic modeling techniques, IAS can detect and stop fraudulent activity powered by automation and AI.

In addition to fraud detection products, marketers should verify the authenticity of website content and reviews to identify potential fraud. Plus, marketers should conduct thorough vetting of publishers to ensure they’re legitimate, along with regular audits of ad campaigns, websites, and user engagement patterns to identify and address suspicious activity.

Marketers can also leverage IAS’s AI-driven MFA detection and avoidance product. Our MFA site technology improves transparency into advertiser campaign quality, identifies where spend is being allocated, and informs optimizations to minimize waste on MFA sites so marketers can take back control of their media quality and cut down on wasted spend.

Don’t let your brand get caught up in the blurry lines of what’s real and what’s not on the internet. Contact an IAS representative today to find out how to fight fraud and drive superior results. 

How Does Bot Traffic Vary Over Time? https://integralads.com/insider/bot-traffic-over-time/ Mon, 18 Dec 2023 05:00:00 +0000 https://integralads.com/?p=328234 It’s no secret that the digital advertising landscape is rampant with malicious bots designed to commit ad fraud. Safeguarding any campaign from fraud is essential — but the threat of fraud is especially present during the holiday season.

The IAS Threat Lab deep dives into ad fraud trends

It’s no secret that the digital advertising landscape is rampant with malicious bots designed to commit ad fraud. Safeguarding any campaign from fraud is essential — but the threat of fraud is especially present during the holiday season. With ad budgets at their highest for the year, fraudsters are prepared to take advantage and ramp up their operations.

In this report, the IAS Threat Lab investigates bot activity to determine trends over time. The team set out to answer: How does bot traffic vary in a 24-hour period? How does it vary over a week? A month? The Threat Lab observed bot traffic over a 391-day period, analyzing billions of impressions each day to understand how bot attacks vary and how predictable — or unpredictable — these attacks may be.

Here’s a sneak peek at what we found.

People sleep, bots don’t

The Threat Lab analyzed billions of impressions across both human traffic and bot traffic. The impressions showed that bots mirror the activity patterns of the humans they are trying to impersonate: both bots and people generate the largest share of traffic between 8 a.m. and 4 p.m.

However, human traffic is sporadic, while bot traffic is quite consistent. We noticed this pattern in late-night activity: human traffic drops dramatically between 10 p.m. and 3 a.m., while bot traffic shows far less variation. Conversely, during the early morning hours of 4 a.m. to 9 a.m., human traffic picks up much faster than bot traffic.
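As a rough illustration of this signal (not IAS's actual methodology, and with entirely fabricated traffic numbers), one can compare the coefficient of variation of hourly impression profiles: human traffic swings widely across the day, while flat, around-the-clock bot traffic produces a much lower value.

```python
from statistics import mean, stdev

def hourly_cv(hourly_impressions):
    """Coefficient of variation of a 24-slot hourly traffic profile.
    Low values indicate suspiciously flat, bot-like traffic."""
    m = mean(hourly_impressions)
    return stdev(hourly_impressions) / m if m else 0.0

# Illustrative (fabricated) profiles: humans dip sharply overnight,
# bots stay comparatively flat around the clock.
human = [20, 15, 10, 8, 10, 25, 60, 90, 120, 130, 125, 120,
         115, 120, 125, 130, 120, 110, 100, 90, 70, 50, 35, 25]
bot   = [80, 78, 82, 79, 81, 85, 95, 100, 110, 112, 108, 105,
         104, 107, 110, 111, 106, 102, 98, 95, 90, 85, 82, 80]

print(hourly_cv(human) > hourly_cv(bot))  # True: the flat bot profile has the lower CV
```

In practice a single statistic like this is only one weak signal among many, but it captures why overnight consistency is suspicious.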

Fraudsters take their weekends off

Weekdays tend to see higher bot traffic volume than weekends. On average, weekday bot traffic is 21.2% greater than weekend bot traffic. For humans, weekday traffic is only 6.9% greater than weekend traffic.

But why is weekday traffic so much higher for bots? Although bots are largely automated, they still require oversight from their human operators. Like employees at any corporation, these individuals most likely work on weekdays and take weekends off, causing a dip in bot traffic.
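The weekday lift quoted above is simple arithmetic over daily averages. A minimal sketch, using fabricated daily counts chosen to mirror the reported 21.2% figure:

```python
def weekday_weekend_lift(daily_counts):
    """Average weekday traffic relative to average weekend traffic.

    daily_counts: list of (weekday_index, impressions) pairs, Monday = 0.
    Returns the percentage by which weekday traffic exceeds weekend traffic.
    """
    weekday = [n for d, n in daily_counts if d < 5]
    weekend = [n for d, n in daily_counts if d >= 5]
    wd_avg = sum(weekday) / len(weekday)
    we_avg = sum(weekend) / len(weekend)
    return (wd_avg / we_avg - 1) * 100

# Fabricated week of bot impressions: noticeably lower on Sat/Sun.
week = [(0, 121), (1, 123), (2, 120), (3, 122), (4, 120), (5, 100), (6, 100)]
print(round(weekday_weekend_lift(week), 1))  # 21.2
```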

Look out for the holiday season

From an ad fraud perspective, holiday seasons are a particularly dangerous period for marketers and publishers. Bot traffic is at its highest during the holidays, with November seeing a higher volume than any other month of the year. Bot traffic in November is 22% greater than the bot traffic in October, and 57% greater than bot traffic in January. 

Even within November, bot traffic grows as the month progresses, peaking in the fourth week. It then continues to stay high throughout December, making marketers particularly vulnerable to ad fraud and warranting extra protection against malicious bots during the holiday season.

Bottom line

Ad campaigns can be seriously impacted by timing — no matter the time of year. It’s crucial to ensure your brand’s ads are shown to real people, in the right place, and are free from fraud during this holiday season and as we enter the new year.

Download the IAS Fraud Trends Report now for more details, and click here to learn more about IAS’s ad fraud product suite.

The State of Fraud in Private Marketplaces https://integralads.com/insider/the-state-of-fraud-in-private-marketplaces/ Thu, 31 Aug 2023 23:59:00 +0000 https://integralads.com/?p=312266


The IAS Threat Lab evaluates fraud in PMP transactions

Marketers widely believe private marketplaces (PMPs) create an environment that is resistant, or even immune, to ad fraud. Given that a PMP is premised on a relationship between trustworthy advertisers and publishers, this seems plausible. Unfortunately, the inherent flaws of open programmatic marketplaces still manage to trickle down into PMP transactions.

What is a PMP?

A private marketplace is a digital marketplace where programmatic advertising transactions take place between trusted and exclusive parties. Within a PMP, a publisher can invite an advertiser to participate in a private auction that occurs in a real-time bidding (RTB) environment. PMPs are essentially a programmatic relationship between a publisher and one or more advertisers that attempts to create a clean, trusted environment for premium ad space to be bought and sold.

PMPs are like a hybrid between traditional direct buying and programmatic. The open-ended nature of the auction is reduced by limiting the pool of buyers, while the benefits of RTB environments are still leveraged to facilitate an automated, seamless bidding process.
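A toy model of this mechanism (hypothetical names and values; no real exchange implements deals exactly this way) shows how a deal ID and an invite list narrow the auction while the bidding itself stays automated:

```python
def run_private_auction(deal_id, invited_buyers, floor_cpm, bids):
    """Simplified PMP auction: only invited buyers bidding on the right
    deal ID at or above the floor are eligible; the highest eligible
    bid wins.

    bids: list of dicts like {"buyer": ..., "deal_id": ..., "cpm": ...}
    Returns the winning bid dict, or None if no bid is eligible.
    """
    eligible = [
        b for b in bids
        if b["buyer"] in invited_buyers
        and b["deal_id"] == deal_id
        and b["cpm"] >= floor_cpm
    ]
    return max(eligible, key=lambda b: b["cpm"], default=None)

bids = [
    {"buyer": "brand_a", "deal_id": "deal-123", "cpm": 9.50},
    {"buyer": "brand_b", "deal_id": "deal-123", "cpm": 11.00},
    {"buyer": "outsider", "deal_id": "deal-123", "cpm": 14.00},  # not invited
]
winner = run_private_auction("deal-123", {"brand_a", "brand_b"}, 8.0, bids)
print(winner["buyer"])  # brand_b: the outsider's higher bid is excluded
```

The key point the sketch makes is that the filter governs *who may bid*, not *what the traffic is*: a bot impression that reaches an invited buyer's deal still clears the auction.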

Why should advertisers use a PMP?

The use of PMPs has advantages and disadvantages. On the upside, advertisers can ensure they are paying for premium inventory on trusted publishers, and inventory slots can be hand-picked for maximum consumer impact. Combining the benefits of both programmatic and direct buying, PMPs are also expected to deliver strong content suitability and viewability metrics, with fraud avoidance as a likely side effect.

On the other hand, participation in PMPs comes at a premium price point for advertisers. PMPs also require significant manual interaction and are more time-consuming than an open exchange, and there's no guarantee of ad relevance or a captive audience.

The assumption is that PMPs let advertisers purchase premium inventory (albeit at premium cost), have their ads prominently displayed adjacent to appropriate content with positive viewability metrics, and avoid typical fraud-generating behavior.

Because PMPs help brands protect their reputations with more meaningful ad placements, their usage has grown significantly in recent years. This makes it even more crucial to get to the core of the "fraud-free" assumption and understand what's really at stake beyond placement.

The truth behind fraud in PMPs

To put it simply, PMPs can't completely prevent fraud. Having evaluated IAS PMP data since the beginning of 2023, we discovered that fraud occurs in PMPs about 19% as often as it does on the open exchange, confirming that fraud volume is indeed lower. However, we also noted that the CPMs of fraudulent PMP transactions were consistently about 6% higher than those of non-fraudulent PMP transactions.

While there’s certainly something to be said about a reduced fraud rate in PMPs, fraud actors are clearly aiming for higher CPMs with specific targeting of PMP transactions.

How could this be the case? Unsurprisingly, it all comes back to the thorn in every advertiser's side: bots. While the relationship between the advertiser and publisher may be trusted, that trust does not prevent bots from eating up impressions by "visiting" any given domain. Even trusted publishers may source traffic, and if that source hasn't been fully vetted, it is highly likely to contain fraudulent traffic. On top of that, benign bots (general invalid traffic) are widely welcomed for services like SEO and indexing, which creates a significant volume of invalid impressions in PMPs, leading to direct losses for advertisers who treat PMP participation as a substitute for ad verification.

In addition to the significant portion of general invalid traffic in PMPs, we also noted a wide array of sophisticated invalid traffic, like user-agent spoofing, device spoofing, geo spoofing, and anomalous behavioral deviation.
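A heavily simplified sketch of what such consistency checks can look like (hypothetical field names and rules; production systems correlate far more signals than this):

```python
def spoofing_signals(imp):
    """Flag simple inconsistencies that can indicate spoofed traffic.

    imp: dict with illustrative fields 'user_agent', 'declared_device',
    'ip_country', and 'geo_country'. Returns a list of flag names.
    """
    flags = []
    ua = imp["user_agent"].lower()
    if imp["declared_device"] == "mobile" and "windows nt" in ua:
        flags.append("ua_device_mismatch")  # desktop UA on a "mobile" device
    if imp["ip_country"] != imp["geo_country"]:
        flags.append("geo_mismatch")        # claimed location differs from IP location
    return flags

imp = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "declared_device": "mobile",
    "ip_country": "US",
    "geo_country": "DE",
}
print(spoofing_signals(imp))  # ['ua_device_mismatch', 'geo_mismatch']
```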

By correlating these factors, we were also able to identify likely cohesive botnet traffic, indicating concerted efforts of fraud.

What can you do to avoid PMP fraud?

That’s where IAS can help. Our three-pillar approach uses machine learning and unmatched scale to provide accurate detection and prevention, whereas many solutions rely on a single automated check. IAS’s industry-leading fraud detection technologies continue to attract significant investment from marketers.

IAS captures up to 280 billion interactions daily from around the world, and trillions of data events are measured each month globally. We provide real-time fraud prevention for programmatic buys with pre-bid targeting segments that leverage MRC-accredited fraud technology. To learn more about our fraud prevention solutions, contact your IAS representative.

IAS Threat Lab collaborates with Google to take down malicious app https://integralads.com/insider/threat-lab-google-take-down-malicious-app-oko-vpn/ Mon, 15 May 2023 13:05:45 +0000 https://integralads.com/insider/threat-lab-google-take-down-malicious-app-oko-vpn/


The digital landscape is prioritizing privacy now more than ever before. Internet users worldwide understandably want to ensure security when browsing the web. However, fraud can be found anywhere – even in apps that present themselves as safe.

The Scheme:

The IAS Threat Lab recently uncovered an elaborate fraud scheme in a virtual private network (VPN) app for Android phones called Oko VPN. Developed by VIP Internet Security LTD., the app was marketed as a free VPN service that anonymizes a user’s web traffic and was made available in the Google Play Store in July 2022.

In reality, Oko VPN was hijacking IP addresses, turning users’ phones into fraud-relaying devices. Any Android phone that installed the app unwittingly donated its IP address to Oko VPN for use in committing ad fraud. The fraudsters exploited users’ IP addresses to mask the origin of traffic, sending fake ad impressions to video streaming platforms. This IP-hijacking scheme is known as “residential proxying.”
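One heuristic that can surface this pattern (an illustrative sketch with fabricated data, not IAS's actual detection method) is to flag residential IPs that emit traffic under implausibly many distinct device fingerprints, since a hijacked device relays other parties' traffic through its own address:

```python
from collections import defaultdict

def flag_proxy_like_ips(impressions, max_distinct_fingerprints=3):
    """Flag IPs whose traffic carries implausibly many distinct device
    fingerprints in one observation window - a possible symptom of
    residential proxying. The threshold is purely illustrative.

    impressions: iterable of (ip, device_fingerprint) pairs.
    """
    fps_per_ip = defaultdict(set)
    for ip, fingerprint in impressions:
        fps_per_ip[ip].add(fingerprint)
    return {ip for ip, fps in fps_per_ip.items()
            if len(fps) > max_distinct_fingerprints}

impressions = [
    ("198.51.100.7", "android-pixel-6"),     # the real device
    ("198.51.100.7", "windows-chrome-120"),  # relayed traffic
    ("198.51.100.7", "mac-safari-17"),
    ("198.51.100.7", "linux-firefox-121"),
    ("203.0.113.9", "android-galaxy-s22"),   # normal household
]
print(flag_proxy_like_ips(impressions))  # {'198.51.100.7'}
```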

The app also created a channel for illicit material and traffic to pass through users’ home networks, opening the door to further attacks against those networks – which underscored the need to remove the app from the Google Play Store immediately.

The Takedown:

Upon detecting the malicious app in March 2023, the IAS Threat Lab notified the Google Play Store team, which conducted its own investigation and confirmed the Threat Lab’s findings. Google then immediately removed the app and enforced Google Play Protect, which warns users and prompts them to uninstall the malicious app.

The Impact:

Oko VPN experienced exponential growth, reaching more than a million users by the time of its takedown. The Threat Lab estimates that Oko VPN was generating approximately 100 million fraudulent impressions per month when it was removed from the Google Play Store, and that $10 million in advertiser spend was wasted on the scheme.

Fraud schemes like this are unfortunately quite common – and advertisers need to be aware. The IAS Threat Lab is constantly working to identify new and novel fraud schemes, protecting advertisers, publishers, and consumers from digital ad fraud. 

IAS established the Threat Lab to provide targeted reconnaissance of new and emerging fraud schemes. The team employs data analysis and reverse engineering to uncover fraud schemes and determine how they work, allowing it to protect advertisers, publishers, and consumers by working with partners and authorities to take down the fraudsters.

For details on the scheme, download the Technical Disclosure: Oko VPN.
