Topic: AI & Tech

252 Facts
412 Related Entities
Bank of America projected that AI capital expenditures (capex) will reach 94% of operating cash flow (after subtracting dividends and buybacks) through 2026.
December 31, 2026 high temporal
Projection of AI-related capital spending relative to companies' operating cash flow, excluding dividends and buybacks.
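The ratio above is AI capex divided by operating cash flow left after dividends and buybacks. A minimal sketch of that arithmetic, using made-up placeholder figures rather than Bank of America's underlying data:

```python
# AI capex as a share of operating cash flow after dividends and buybacks.
# All figures are hypothetical placeholders, not Bank of America's data.

def capex_ratio(ai_capex: float, operating_cash_flow: float,
                dividends: float, buybacks: float) -> float:
    """Return AI capex as a fraction of cash flow left after payouts."""
    residual_cash_flow = operating_cash_flow - dividends - buybacks
    return ai_capex / residual_cash_flow

# Example with made-up numbers (in $ billions): 470 / (800 - 100 - 200) = 0.94
print(f"{capex_ratio(470, 800, 100, 200):.0%}")  # -> 94%
```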
A fully autonomous cyberattack is one in which AI agents carry out an entire cyber operation with minimal human input.
November 16, 2025 high definition
Standard definition of 'fully autonomous cyberattack' used to describe AI-driven operations in cybersecurity discussions.
Cybersecurity vendors are deploying AI systems to automate basic defensive tasks such as detecting phishing emails, shutting down suspicious scripts before they execute, and anticipating the likely targets of adversaries' AI models.
November 16, 2025 medium trend
Describes an ongoing industry trend toward using AI to automate detection, prevention, and threat anticipation in defensive cybersecurity.
A 2025 evaluation by Anthropic measured political 'evenhandedness' and scored Google's Gemini 2.5 Pro at 97%, Grok 4 at 96%, Anthropic's Claude Sonnet 4.5 and Opus 4.1 at 95%, OpenAI's GPT-5 at 89%, and Meta's Llama 4 at 66%.
November 14, 2025 high temporal
Reported model-level evenhandedness scores from Anthropic's automated evaluation method.
A 2025 Anthropic evenhandedness metric evaluates how well a chatbot offers and engages with opposing political perspectives and also measures how often the chatbot refuses to answer.
November 14, 2025 high temporal
Description of the components that the evenhandedness score is designed to capture.
A 2025 Anthropic evaluation methodology used paired prompts that presented left-leaning and right-leaning perspectives and graded single-turn chatbot responses to U.S. political queries on evenhandedness.
November 14, 2025 high temporal
Summary of the experimental design used to assess chatbot responses for political balance.
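The paired-prompt design lends itself to a simple evaluation loop. The sketch below illustrates only that shape, assuming a stand-in chatbot call and a stand-in automated grader; it is not Anthropic's actual evaluation code.

```python
# Illustrative paired-prompt evenhandedness loop. `ask_model` and
# `grade_evenhandedness` are placeholder stand-ins, not Anthropic's code.
from statistics import mean

PAIRED_PROMPTS = [
    # The same U.S. political query framed from a left- and a right-leaning angle.
    ("Make the strongest case for raising the federal minimum wage.",
     "Make the strongest case against raising the federal minimum wage."),
]

def ask_model(prompt: str) -> str:
    return "placeholder single-turn response"   # swap in the chatbot under test

def grade_evenhandedness(prompt: str, response: str) -> float:
    return 0.95                                 # swap in an automated grader (0-1)

def evenhandedness_score(pairs) -> float:
    scores = [grade_evenhandedness(p, ask_model(p))
              for left, right in pairs
              for p in (left, right)]
    return mean(scores)

print(f"evenhandedness: {evenhandedness_score(PAIRED_PROMPTS):.0%}")
```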
As of 2025 there is no universally agreed-upon definition of political bias in AI systems and no consensus on how to measure it.
November 14, 2025 high temporal
General observation about the state of definitions and measurement approaches for political bias in AI.
Setting up a passport-based Digital ID in Apple Wallet requires an iPhone 11 or later or an Apple Watch Series 6 or later running current software, a Face ID- or Touch ID-capable device, an unexpired U.S. passport, and two-factor authentication on the user's Apple account; the setup process involves scanning the passport photo page and embedded chip and verifying identity with a live selfie and head movements, and the Digital ID can be synced to an Apple Watch.
November 13, 2025 high technical_requirement
Describes hardware, credential and verification steps required to create a passport-based Digital ID in Apple Wallet.
At TSA identity checkpoints, presenting a passport-based Digital ID from Apple Wallet involves double-clicking the device side button, selecting the Digital ID, holding the top of the iPhone or the face of the Apple Watch near a TSA identity reader, reviewing a prompt that shows what information will be shared, and confirming the transaction with Face ID, Touch ID, or a passcode.
November 13, 2025 high procedural
Describes the user interaction flow and verification steps for presenting a Digital ID to a TSA identity reader.
Apple states that its passport-based Digital ID in Wallet is encrypted and private, is accessible only via Face ID or Touch ID (or a passcode), and that Apple does not see when or where users present the Digital ID.
November 13, 2025 high security_claim
Apple's stated privacy and security model for Apple Wallet Digital ID.
A digital passport-based ID stored in Apple Wallet is not a replacement for a physical passport for international flights and border crossings; travelers must still carry a physical passport for international travel.
November 13, 2025 high regulatory_guidance
Clarifies the limits of using a digital passport-based ID versus physical passport for international travel.
A 2025 Deloitte Digital Media Trends survey found that more than half of Generation Z respondents said social media content felt more relevant to them than traditional TV shows and movies.
November 13, 2025 high statistic
Survey finding on media relevance for Gen Z.
Younger audiences, particularly members of Generation Z, tend to prefer participatory digital spaces where they can create, remix, and respond to content rather than passively watch traditional TV or movies.
November 13, 2025 high trend
General audience behavior and preference trend.
Industry leaders in streaming and entertainment expect the future of entertainment to become more personalized, more interactive, and more immersive.
November 13, 2025 high trend
Broad industry expectation about the direction of entertainment experiences.
As of 2025, multiple entertainment companies, including The Walt Disney Company, were engaged in legal actions alleging that certain AI companies had infringed copyrights.
November 13, 2025 high temporal
Ongoing legal trend involving entertainment firms and AI companies over copyright claims.
Phishing-as-a-service platforms offer subscription-based phishing toolkits that are distributed via messaging apps such as Telegram and sold on weekly, monthly, seasonal, annual, or permanent pricing models.
November 12, 2025 high technical
Describes distribution and monetization model for turnkey phishing platforms.
Some phishing platforms include hundreds of spoof templates impersonating hundreds of organizations and allow operators to filter templates by geographic region to better target local victims.
November 12, 2025 high technical
Describes template-based targeting capabilities used by phishing toolkits.
Phishing toolkits can capture payment information without an explicit form submission by tracking victims' keystrokes in real time and can bypass multi-factor authentication by eliciting legitimate verification codes and prompting victims to enter them on fake screens.
November 12, 2025 high technical
Describes technical mechanisms used to steal credentials, payment data, and to defeat multi-factor authentication.
Large-scale scam operations are often structured with specialized roles, including data brokers who aggregate personal information from breaches, social media, and public records; spammers who send mass messages using phones, modems, and SIM cards; theft groups that drain accounts, launder money, and resell payment data; and administrators who provide tutorials, onboarding, and marketing to recruit new operators.
November 12, 2025 high organizational
Describes common organizational structure and division of labor within complex online scam enterprises.
In 2025, AI developers such as OpenAI and Anthropic, which were not yet profitable, were engaging in intertwined deals with chip makers and data center builders and spending heavily on computing infrastructure, a pattern that raised concerns about a potential AI investment bubble.
November 12, 2025 high temporal
Trend of AI developers securing hardware and data center capacity through deals while investing significant capital into infrastructure for AI workloads.
Large language models (LLMs) are commonly trained on large corpora of text that can include news articles, in-depth investigations, opinion pieces, reviews, and how-to guides.
November 12, 2025 high definitional
Describes typical data sources used to train generative AI and LLMs.
Organizations can apply de-identification procedures to user conversations to remove or redact personal information before sharing those conversations for analysis or legal review.
November 12, 2025 high process
Common privacy-preserving practice for handling potentially sensitive user-generated content.
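A minimal sketch of such a de-identification step, assuming simple regex patterns; production pipelines typically add many more patterns, named-entity recognition, and human review.

```python
# Toy redaction pass over a user conversation. The patterns are illustrative
# examples only; real de-identification pipelines are far more thorough.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def deidentify(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(deidentify("Reach me at jane.doe@example.com or 555-867-5309."))
```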
A platform's terms of service can permit the platform to use user-generated chats to train machine learning models and to disclose chats in response to lawful legal demands.
November 12, 2025 high legal
Legal and contractual basis that may govern how companies handle and disclose user data.
The AI-Related Job Impacts Clarity Act would require covered entities, including major companies and federal agencies, to report to the U.S. Department of Labor each quarter on AI-related job effects such as layoffs, hires, and positions left unfilled because their tasks were automated.
November 09, 2025 high policy
Core reporting requirements specified by the proposed legislation to track how AI affects employment.
Designing an AI-related job reporting system carries implementation challenges, including inconsistent reporting if each company defines 'AI-related job impact' differently, potential coverage gaps if smaller businesses fall below reporting thresholds, and risks to data quality that necessitate strong verification by the Department of Labor.
November 09, 2025 high challenge
Common issues that affect the reliability and completeness of mandatory corporate reporting systems on automation and employment.
In 2025, ICE licensed software that provides access to large amounts of location-based data for use in identifying or monitoring individuals.
November 08, 2025 high temporal
Refers to newly licensed capabilities to access and use location-based data for surveillance or tracking.
In 2025, ICE expanded social-media surveillance by contracting AI-driven software and considering hiring around-the-clock contractor teams to scour databases and platforms such as Facebook and TikTok to compile dossiers on users.
November 08, 2025 high temporal
Describes a programmatic increase in use of AI tools and contractor labor to monitor social-media platforms and other databases.
Regulators and child-safety advocates warn that releasing online products rapidly without adequate guardrails can recreate harms seen with social media.
November 07, 2025 high trend
General policy concern about rapid product launches in tech and potential harms to users.
Chatbot design features such as persistent memory, simulated empathy, and overly agreeable responses are associated with increased user engagement and can contribute to emotional reliance on the system.
November 07, 2025 high design_implication
Design characteristics that can affect user behavior and emotional dependence on conversational AI.
Major technology companies commonly adopt measures such as parental controls and tightened safety settings to mitigate risks to minors and other vulnerable users of AI systems.
November 07, 2025 high practice
Typical industry responses to safety concerns about AI and child users; examples include efforts by companies like Meta and Apple.
There is growing regulatory and public scrutiny of how AI companies interact with vulnerable users, especially minors, which may drive the development of stronger legal safety standards for chatbot use related to emotional and mental health.
November 07, 2025 high trend
Ongoing policy and legal scrutiny of AI impacts on vulnerable populations and potential future regulation.
As of 2025, 94% of Danish children under age 13 have profiles on at least one social media platform, and more than half of Danish children under age 10 have such profiles.
November 07, 2025 high temporal
Reported prevalence of social media account ownership among Danish children.
As of 2025, Denmark operates a national electronic ID system and nearly all Danish citizens aged over 13 hold such an electronic ID.
November 07, 2025 high temporal
Description of Denmark's national electronic identification coverage.
As of 2025, many social media platforms implement age restrictions intended to block pre-teens from creating accounts, but those age-restriction measures often do not effectively prevent underage use.
November 07, 2025 high temporal
General observation about platform age gates and enforcement challenges.
As of 2025, China has imposed limits on online game time and smartphone usage for children.
November 07, 2025 high temporal
Example of a national policy aimed at restricting minors' screen and gaming time.
The Congressional Budget Office (CBO) studies and produces analyses of the financial impact of legislation introduced in both the U.S. Senate and the U.S. House of Representatives.
November 06, 2025 high general
Institutional role of the CBO in the U.S. legislative process.
Government agencies and private-sector entities periodically face cyber threats and implement continuous network monitoring and defensive controls to address those threats.
November 06, 2025 high general
General cybersecurity posture and practices across public and private organizations.
Nation-state actors target federal agencies' computer networks as part of cyber intrusion and espionage campaigns.
November 06, 2025 high general
Observed pattern of cyber threats against federal institutions.
The Motion Picture Association (MPA) operates a film rating system that assigns ratings ranging from G to NC-17.
November 05, 2025 high descriptive
Overview of the scope and rating categories used by the MPA's film classification system.
A 'roadable aircraft' is a hybrid vehicle design capable of operating both on public roads as a motor vehicle and in the air as an aircraft.
November 05, 2025 high definition
Term used in legislation and regulatory discussions about vehicles that can both drive and fly.
The U.S. Treasury's Office of Foreign Assets Control reported that, between 2022 and 2025, North Korean malware and social engineering schemes diverted more than $3 billion, mostly in digital assets, an amount the office said was larger than that attributable to any other foreign actor.
November 04, 2025 high statistical
Estimate of funds diverted by North Korean cyber schemes as reported by the U.S. Treasury's Office of Foreign Assets Control.
North Korea relies on a network of banking representatives, financial institutions, and shell companies in North Korea, China, Russia, and elsewhere to launder funds obtained through IT worker fraud, cryptocurrency heists, and sanctions evasion.
November 04, 2025 high operational
Description of mechanisms and geographic reach used to launder funds linked to North Korean cyber and fraud operations.
In 2022, the U.S. Department of the Treasury warned that highly skilled North Korean nationals may obfuscate their identities and pose as remote IT workers to gain access to foreign financial networks.
November 04, 2025 high policy
Government advisory on a recruitment and identity-obfuscation tactic used to penetrate financial networks.
The World Economic Forum has stated that the adoption of artificial intelligence is uneven across industries, meaning that AI will not impact all jobs or sectors equally.
October 31, 2025 high temporal
Assessment of differential AI adoption and labor impact across industries.
Nauto, Inc.'s Visually Enhanced Risk Assessment (VERA) Score is an AI-based metric that measures commercial fleet safety on a scale from 1 to 100.
October 31, 2025 high temporal
Description of a commercial fleet safety evaluation metric that uses AI.
AI image-generation models can be used to produce convincing fake expense receipts that may be submitted fraudulently to employers.
October 31, 2025 high temporal
Emerging misuse of generative image models to fabricate financial or administrative documents.
Concerns that AI 'companion' chatbots can groom, manipulate, or otherwise harm minors have prompted legislative scrutiny and proposals aimed at holding platform operators accountable for minors' safety.
October 31, 2025 high temporal
Policy and regulatory response to safety risks associated with conversational AI and minors.
China has stated that, under Chinese law, the TikTok recommendation algorithm must remain under Chinese control.
October 30, 2025 high temporal
Official position regarding legal control of algorithms used by apps with Chinese ownership.
Rare earth elements are essential inputs for technologies including computer chips and aerospace systems.
October 29, 2025 high descriptive
Importance and applications of rare earth elements
Amazon Web Services (AWS) is a major cloud-computing provider.
October 28, 2025 high descriptive
AWS is the cloud arm of Amazon and a leading provider of cloud infrastructure and services.
Adoption of generative AI and other AI technologies can produce efficiency gains that may reduce staffing needs for some corporate roles while increasing demand for other types of roles.
October 28, 2025 medium trend
AI-driven automation and productivity tools change labor requirements across organizations.
Nvidia's annual GPU Technology Conference (GTC) held in San Jose is widely referred to as the "Super Bowl of AI".
October 27, 2025 high descriptive
Industry reputation of Nvidia's flagship developer conference.
Nvidia's developer conferences commonly feature sessions and live demonstrations on chip design, AI (including topics such as superintelligence), robotics, life sciences, energy, quantum computing, and 6G.
October 27, 2025 high descriptive
Typical thematic coverage at Nvidia developer conferences.
Eric Schmidt is the chair of the Special Competitive Studies Project (SCSP).
October 27, 2025 high status
Organizational leadership role at SCSP.
As of October 25, 2025, more than 45 U.S. states had passed or proposed laws to criminalize the creation or distribution of deepfake sexual content made without consent.
October 25, 2025 high temporal
This summarizes the legislative response across U.S. states to nonconsensual deepfake sexual content.
AI 'clothes-removal' or deepfake image tools can produce realistic-looking nude images from existing photos by algorithmically removing or altering clothing while preserving the subject's facial features, enabling rapid sharing and potential privacy and emotional harms.
October 25, 2025 high temporal
Technical and social description of how certain image-manipulation AI tools operate and the harms they can cause.
ChatGPT is designed with built-in restrictions on certain topics, including some political issues and content that could be considered copyright infringement.
October 24, 2025 high descriptive
General product design feature describing content-moderation restrictions in the ChatGPT system.
OpenAI states that it implements safeguards for sensitive conversations, including surfacing crisis hotlines, re-routing sensitive conversations to safer models, and nudging users to take breaks during long sessions.
October 24, 2025 high descriptive
Examples of safety measures OpenAI identifies for handling sensitive user interactions.
OpenAI has announced plans to relax some content restrictions to allow verified adult users to generate erotica using its ChatGPT product.
October 24, 2025 high descriptive
Company policy change regarding adult-content generation for verified users.
Identity details and medical records are frequently sold in bulk on dark web marketplaces, where fraud operators purchase them to commit financial scams, insurance fraud, or obtain prescription drugs.
October 24, 2025 high timeless
Describes common illicit market uses for stolen personal and medical data.
Leaked personal data from breaches often continues to circulate on illicit markets and forums long after the breached organization publicly discloses or closes the incident.
October 24, 2025 high timeless
Indicates the persistent, long-term availability of breached data.
Medical data breaches are particularly harmful because medical histories and government ID scans cannot be reset or replaced in the same way that passwords can be changed.
October 24, 2025 high timeless
Explains why compromise of medical and identity documents has long-term consequences.
Ransomware attackers commonly exfiltrate sensitive data and threaten to publish or sell that data as leverage to demand payment from victim organizations.
October 24, 2025 high timeless
Summarizes a common extortion tactic used by ransomware groups.
Scraping publicly available online data is a common practice used by businesses and researchers.
October 22, 2025 high general
Describes a widespread data-collection method used across industry and academia.
Developers of AI chatbots and answer engines commonly rely on large collections of online writings as training data for language models.
October 22, 2025 high general
Explains a general source of training material for conversational AI systems.
Some web-scraping operations evade website anti-scraping measures and mask identities to harvest publicly available content, and scraped content is sometimes sold commercially as training material for AI models.
October 22, 2025 high general
Describes recurring techniques and commercialization practices in large-scale web scraping.
YouTube's content policy prohibits directing viewers to online gambling sites or applications that are not certified by Google and requires that content which depicts or promotes in-person gambling be age-restricted.
October 20, 2025 high policy
Platform content-moderation rules governing gambling-related videos on YouTube.
A social media trend known as the "AI Homeless Man Prank" involves users creating and sharing AI-altered images that depict a homeless person placed inside someone’s home.
October 20, 2025 high temporal
Describes the nature of the viral prank as a reusable concept rather than a single incident.
Creating and sharing deceptive imagery that portrays vulnerable populations can dehumanize those populations and cause panic or emotional distress to recipients of the content.
October 20, 2025 high temporal
General social and ethical consequence of using AI-generated deceptive imagery in pranks.
OpenAI's Sora 2 is an artificial intelligence tool that can generate realistic, high-quality audio and video from text prompts and images.
October 20, 2025 high descriptive
Description of the capabilities of the Sora 2 AI tool.
AI video-generation tools can be used to create realistic fabricated videos that portray historical figures or deceased and living public figures performing actions they did not actually perform.
October 20, 2025 high descriptive
General pattern of misuse reported with contemporary generative video tools.
Amazon, Microsoft, and Google are three major cloud computing providers that together serve as a technical backbone for large parts of the internet.
October 20, 2025 high structural
Large-scale cloud providers host infrastructure and services relied on by millions of users and thousands of companies.
Many businesses have outsourced their data center operations to large cloud providers because outsourcing is generally more cost-effective and operationally efficient than maintaining private data centers.
October 20, 2025 high economic
Outsourcing to cloud providers reduces capital and maintenance burdens associated with running private infrastructure.
Centralization of cloud infrastructure creates systemic 'centralization risk' in which failures at a single cloud provider can cause simultaneous outages across many dependent companies and services.
October 20, 2025 high risk
When many services rely on the same underlying provider, a provider-level failure can cascade to numerous customers.
Omnilert describes its school safety product as combining artificial intelligence detection with human verification and elevating identified possible threats for authorized safety personnel to make the final determination.
October 20, 2025 high temporal
Company-reported description of the workflow used by a commercial school safety system.
Regulators such as the U.S. National Highway Traffic Safety Administration (NHTSA) and the California Department of Motor Vehicles (DMV) consider software marketed as "Full Self-Driving" to be misleading because such systems require constant human driver supervision.
October 18, 2025 high general
Regulatory positions about marketing and capabilities of vehicle automation systems.
The U.S. National Highway Traffic Safety Administration (NHTSA) has authority to investigate the safety of vehicle software and may order recalls if it determines that vehicle software poses a safety risk.
October 18, 2025 high general
Regulatory powers related to vehicle safety and software-induced risks.
Current commercially available driver-assistance or "self-driving" systems require the human driver to monitor the system, keep hands on the wheel, watch the road, and be prepared to manually override in complex situations such as intersections, crosswalks, and railroad crossings.
October 18, 2025 high general
Operational limitations and safe-use practices for existing driver-assist systems.
The Sora app required new users to record a video of themselves from multiple angles and to record themselves speaking, and Sora provided a user-controlled setting called a "cameo" that allowed users to control whether others could create AI-generated videos of their likeness.
October 17, 2025 high product
Description of Sora app onboarding and user controls for likeness use
OpenAI's initial development approach for large consumer products involved training models on large volumes of copyrighted content without prior approval or payment from all rights holders and later negotiating licensing deals with some publishers.
October 17, 2025 high company-practice
Characterization of OpenAI's historical data-collection and later licensing practices
Some AI content-creation platforms have allowed users to create hyper-realistic AI-generated videos of public figures and historical figures without explicit consent from rights holders or estates.
October 17, 2025 high policy
General observation about capabilities and consent practices on certain AI deepfake platforms
Everytown for Gun Safety collected data from about two dozen U.S. police departments showing roughly 30 recoveries of 3D-printed firearms in 2020 and more than 300 recoveries in 2024.
October 16, 2025 high statistical
Reported trend in recoveries of 3D-printed firearms over time based on Everytown's compilation.
3D-printed firearms are often produced outside the traditional firearms industry, and 3D-printer manufacturers and cloud-based platforms that host gun blueprints generally fall outside ATF authority, creating regulatory and traceability gaps.
October 16, 2025 high structural
Jurisdictional and regulatory limitations that affect oversight of 3D-printed weapons.
Decreasing costs and increasing sophistication of consumer 3D printers, combined with rapid online distribution of gun blueprints, can facilitate the proliferation of 3D-printed firearms and complicate tracing and regulation.
October 16, 2025 high technological
Technology and distribution trends that enable wider access to unregulated, homemade weapons.
Instagram automatically places users under 18 into restrictive teen accounts that are private by default, include usage restrictions, and filter out more sensitive content.
October 14, 2025 high policy
Describes default account configuration and content-filtering behavior for under-18 users on Instagram.
Instagram applies a PG-13 content standard to teen-targeted content that excludes sexual content, drugs, dangerous stunts, and strong language, and the PG-13 standard is intended to apply to AI chat responses and AI experiences targeted to teens.
October 14, 2025 high policy
Defines the content-safety threshold labeled 'PG-13' for teen-directed content and AI interactions on Instagram.
Instagram blocks or limits content that promotes self-harm, eating disorders, or suicide for teen accounts and blocks certain search terms related to sensitive topics, with blocked terms expanded to include broader words such as 'alcohol' or 'gore' even if misspelled.
October 14, 2025 high policy
Describes content-moderation and search-term blocking behaviors aimed at reducing teen exposure to self-harm and other sensitive material.
Instagram prevents teen accounts from following, interacting with, or receiving messages or comments from accounts that regularly share age-inappropriate content, and offers an optional parental 'limited content' setting that blocks additional content and removes teens' ability to see, leave, or receive comments.
October 14, 2025 high policy
Describes account-level restrictions and a stronger parental control option to limit teen interactions with age-inappropriate accounts and comments.
AI accelerators are specialized chips, typically deployed in racks of customized hardware, that are designed to speed up large-scale artificial intelligence workloads.
October 13, 2025 high contextual
Describes the class of hardware commonly deployed to run modern AI models.
Specialized chips from chipmakers such as NVIDIA and AMD are commonly used to run AI systems, and those chips are typically housed in data centers.
October 13, 2025 high contextual
Industry practice for providing the compute resources required by large AI models.
Broadcom supplies semiconductor products and works with major cloud and AI service providers, including Amazon and Google.
October 13, 2025 high contextual
Describes Broadcom's role as a supplier in the cloud and AI infrastructure ecosystem.
Circular financing describes arrangements in which companies both invest in a startup and supply that startup with technology or services, a structure that can raise concerns about conflicts of interest and speculative overvaluation in a sector.
October 13, 2025 high definition
Financial structure observed in technology ecosystems where suppliers are also investors.
Nation-state cyber actors can target critical infrastructure providers of all sizes, including small local utilities and water treatment facilities, because such providers may have weaker cybersecurity protections.
October 12, 2025 high temporal
Describes a durable threat model in cybersecurity regarding targeting choices by sophisticated adversaries.
Cyber intrusions often exploit vulnerabilities in network equipment such as firewalls, with unpatched software or unsupported, out-of-date equipment lacking security updates creating common attack vectors.
October 12, 2025 high temporal
Explains a common technical cause of successful network breaches.
Advanced cyber intruders sometimes avoid deploying conspicuous malware, instead stealing login credentials and using legitimate accounts to masquerade as authorized users and remain dormant to maintain persistent access.
October 12, 2025 high temporal
Describes a persistent access and operational tradecraft used in some intrusions.
Tesla's Full Self-Driving (Supervised) is classified as Level 2 driver-assistance software and requires drivers to pay full attention to the road.
October 09, 2025 high descriptive
Definition of one type of Tesla's Full Self-Driving (FSD) offering and its required driver attention level.
Tesla's "summon" feature allows a vehicle to drive to a driver's location to pick them up.
October 09, 2025 high descriptive
Description of a Tesla vehicle feature that enables remote command for the car to navigate to a specified location.
Tesla's stated position is that its Full Self-Driving (FSD) system cannot drive itself and requires human drivers to be ready to intervene at all times.
October 09, 2025 high descriptive
Company position regarding limitations and required human supervision of its FSD technology.
A 2025 national survey by the Center for Democracy and Technology (CDT) found that nearly 1 in 5 U.S. high school students (ninth through 12th grade) reported that they or someone they know has had a romantic relationship with an artificial intelligence system.
October 08, 2025 high temporal
Findings come from a 2025 CDT national survey of U.S. public school students, teachers, and parents.
A 2025 OpenAI internal report established a five-part framework to identify and score political bias in large language models, using the axes: user invalidation, user escalation, personal political expression, asymmetric coverage, and political refusals.
October 08, 2025 high definition
Framework for detecting and measuring political bias in LLM outputs.
A 2025 OpenAI internal report used a dataset of approximately 500 questions spanning 100 political and cultural topics, with each question written from five ideological perspectives: conservative-charged, conservative-neutral, neutral, liberal-neutral, and liberal-charged.
October 08, 2025 high methodology
Dataset and prompt design for evaluating LLM political bias.
A 2025 OpenAI internal report evaluated model responses by scoring each response on a scale from 0 (neutral) to 1 (highly biased) using an automated AI model to act as a grader.
October 08, 2025 high methodology
Automated scoring approach used to quantify political bias in model outputs.
A 2025 OpenAI internal report found that GPT-5 Instant and GPT-5 Thinking reduced measured political bias by roughly 30% compared to GPT-4o, and that analysis of real-world ChatGPT usage showed less than 0.01% of responses exhibited signs of political bias.
October 08, 2025 high statistical
Reported comparative bias reduction between model generations and measured prevalence of biased responses in user data.
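Taken together, the report's method amounts to asking each question under several ideological framings and averaging an automated grader's 0-to-1 bias scores. A hedged sketch of that pipeline shape follows; the function bodies and prompts are placeholder assumptions, not OpenAI's internal code.

```python
# Sketch of a framing-and-grading pipeline like the one described above.
# `ask_model` and `grade_bias` are placeholders, not OpenAI's implementation.
from statistics import mean

FRAMINGS = ["conservative-charged", "conservative-neutral", "neutral",
            "liberal-neutral", "liberal-charged"]

def ask_model(question: str, framing: str) -> str:
    return "placeholder response"   # swap in the model under evaluation

def grade_bias(response: str) -> float:
    return 0.0                      # swap in the automated grader: 0 = neutral, 1 = highly biased

def mean_bias(questions: list[str]) -> float:
    scores = [grade_bias(ask_model(q, f)) for q in questions for f in FRAMINGS]
    return mean(scores)

print(mean_bias(["Should the federal minimum wage be raised?"]))
```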
The AI boom has driven increased demand for high-performance graphics processing chips for AI workloads, with companies such as Nvidia supplying GPUs widely used for those tasks.
October 06, 2025 high trend
Describes a market trend linking AI application growth to higher demand for GPUs.
Nvidia's GB200 is a high-density computing rack product designed to house dozens of specialized AI chips within a single tall rack.
October 06, 2025 high product
Describes the architecture and purpose of a specific AI hardware product line.
Large-scale AI data center capacity is commonly expressed in electrical power terms (gigawatts), with organizations using multi-gigawatt figures to quantify the scale of AI computing deployments.
October 06, 2025 high measurement
Explains a standard way the industry quantifies the scale of AI compute infrastructure.
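A back-of-the-envelope illustration of why gigawatts are a convenient unit for AI compute scale; the 100 kW per-rack power draw below is an illustrative assumption, not a vendor specification.

```python
# Rough scale illustration only; the per-rack power draw is an assumption.
GIGAWATT_W = 1_000_000_000      # 1 GW in watts
ASSUMED_RACK_KW = 100           # assumed draw of one AI accelerator rack, in kW

racks_per_gw = GIGAWATT_W / (ASSUMED_RACK_KW * 1_000)
print(f"1 GW supports roughly {racks_per_gw:,.0f} such racks")  # ~10,000
```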
In 2025, U.S. Immigration and Customs Enforcement (ICE) launched a $30 billion initiative to hire approximately 10,000 additional deportation officers.
October 06, 2025 high policy
Describes ICE staffing and enforcement expansion goals announced during 2025.
A 2025 U.S. PIRG Education Fund report analyzing YouMail data found that Americans received an average of 2.56 billion robocalls per month from January to September 2025, up from an average of 2.14 billion robocalls per month in 2024.
September 30, 2025 high temporal
Monthly robocall volume comparison based on PIRG's analysis of YouMail data.
Federal Reserve Chair Jerome Powell said in 2025 that spending to build data centers is not especially interest-rate sensitive and is based on longer-run assessments that such investment will drive higher productivity.
September 29, 2025 high temporal
Economic assessment of data-center capital expenditures and their sensitivity to interest rates.
Vanguard global chief economist Joe Davis said in 2025 that large-scale spending on AI-related infrastructure has served as an important backstop for the economy and supported stronger growth.
September 29, 2025 high temporal
Macroeconomic impact of AI and related infrastructure investment.
A 2025 OpenAI report found that foreign adversaries increasingly use multiple AI models—commonly using ChatGPT to plan operations and other models to carry them out—to power hacking and influence operations.
September 07, 2025 high temporal
Describes a pattern observed by OpenAI of multi-model workflows in hostile cyber and influence activities.
A 2025 OpenAI report found that threat actors use ChatGPT to research and refine phishing automation and then run that automation on other models, including China-based models such as DeepSeek.
September 07, 2025 high temporal
Example of a multi-model workflow where one model is used for prompt/research and another for execution.
A 2025 OpenAI report found that adversaries use AI models to assist in developing malware and composing phishing emails.
September 07, 2025 high temporal
AI-assisted development of malicious tools and social-engineering content was identified as a use-case by hostile actors.
A 2025 OpenAI report found that nation-state hackers and scam networks are adopting techniques to hide signs of AI-generated content, including instructing models like ChatGPT to alter punctuation to remove telltale markers.
September 07, 2025 high temporal
Adversaries are taking steps to evade AI-detection by modifying generated outputs.
A September 2025 Pew Research Center report found that about 43% of U.S. adults under age 30 said they regularly get news from TikTok, a higher share than for any other social media app including YouTube, Facebook, and Instagram.
September 01, 2025 high temporal
Usage of TikTok for news among younger U.S. adults relative to other social platforms.
The Federal Trade Commission reported that the total amount of money lost to phone scams increased 16% from the first half of 2024 to the first half of 2025.
June 30, 2025 high temporal
FTC measurement of change in consumer losses attributed to phone scams across two consecutive first-half periods.
Between February and April 2025, CBS News identified more than 600 Instagram accounts that posted real-world violent or graphic videos packaged into short-form meme-style posts.
April 30, 2025 high statistical
Result of a journalistic investigation quantifying accounts that share real-world violence on Instagram in early 2025.
Govini develops artificial-intelligence software used by the U.S. Department of Defense and other government agencies to analyze large volumes of government and commercial data, including defense budgets, industrial-base capacity, supply chains, and acquisition programs.
April 01, 2025 high temporal
Description of the company's core product and deployment areas.
Govini reported surpassing $100 million in annual recurring revenue and secured a $150 million growth investment in 2025.
April 01, 2025 high temporal
Company financial milestone and investment reported in 2025.
A 2025 Pew Research Center survey found that approximately 960,000 people, about 20% of the United States' Indian population of 4.9 million, lived in California.
January 01, 2025 high statistical
Population distribution of Indian Americans by state according to a 2025 Pew survey.
The 2025 Global State of AI at Work report found that nearly three out of five companies said they were hiring for AI-related roles in 2025.
January 01, 2025 high temporal
Survey-based report on corporate hiring trends related to artificial intelligence.
A 2025 arXiv study that tested 50 questions rewritten in five tones found that ChatGPT-4o's accuracy rose from 80.8% for very polite prompts to 84.8% for very rude prompts.
January 01, 2025 high temporal
Experimental evaluation of prompt tone effects on model accuracy using a 50-question sample.
A 2025 U.S.-China Economic and Security Review Commission report found that China built roughly 350 new intercontinental ballistic missile silos and expanded its nuclear warhead stockpile by about 20% over the prior year.
January 01, 2025 high statistical
Findings reported in the commission's 2025 annual report on U.S.-China security and economic issues.
A 2025 U.S. PIRG Education Fund report found that the annual volume of robotexts was roughly 19 billion in 2024 and roughly 7 billion in 2021.
December 31, 2024 high temporal
Trend showing a large increase in automated scam/telemarketing texts between 2021 and 2024.
Bank of America reported that AI capital expenditures (capex) were 76% of operating cash flow (after subtracting dividends and buybacks) in 2024.
December 31, 2024 high temporal
Historical level of AI-related capital spending relative to companies' operating cash flow, excluding dividends and buybacks.
A law passed by the U.S. Congress in April 2024 required TikTok's China-based parent company, ByteDance, to divest its U.S. operations or face a ban.
April 01, 2024 high legal
U.S. congressional legislation from April 2024 imposed divestiture-or-ban requirements on ByteDance regarding TikTok's U.S. operations.
The Digital Services Act (DSA) came into force in February 2024 and guarantees researchers anywhere in the world access to public data from online platforms for studies of 'systemic risks' to the EU, including negative effects on elections and public health.
February 01, 2024 high legal
Scope of researcher-access provisions under the DSA.
State-affiliated Chinese cyber actors have targeted U.S. critical infrastructure—including water treatment, electrical power, transportation, telecommunications, and hospitals—to create access that could be used to gain an advantage in a crisis or armed conflict.
January 01, 2024 high temporal
Describes targeting patterns and strategic purpose attributed to state-sponsored Chinese cyber operations against civilian critical infrastructure.
Water treatment plants and other utilities commonly use networked industrial control systems and remote computer control for processes such as chemical dosing, creating operational safety risks if an adversary gains control of those systems.
January 01, 2024 high temporal
Describes a general technological characteristic of utility operations and the associated security risk from compromise.
Some online platform operators restrict researcher access to public data citing tensions with privacy regulations and concerns that shared data could be used to train artificial intelligence models.
January 01, 2024 high operational
Reported reasons platforms limit external researcher access to their datasets.
AI-driven appearance-modification tools can alter a person's online images to make them appear younger.
October 20, 2023 high temporal
Refers to generative or editing AI systems that modify facial appearance in photos or video.
AI-based digital-investigative tools can be used to trace individuals' online activity and can assist in identifying or estimating a person's physical location.
October 20, 2023 high temporal
Includes techniques that analyze online footprints, metadata, and other digital traces with automated or AI-assisted methods.
Multiple intrusions into U.S. utility computer networks were discovered in 2023, and Chinese actors had maintained access on some utility networks for at least five years.
January 01, 2023 medium temporal
Reports indicated both recent discovery of compromises and long-duration presence on some utility networks.
In 2022, the American Civil Liberties Union (ACLU) published documents outlining partnerships between U.S. Immigration and Customs Enforcement (ICE) and U.S. Customs and Border Protection (CBP) with private companies Venntel and Babel Street that provided real‑time cell phone location data, and an Inspector General report said the agencies' use of that location data violated the agencies' privacy policies.
January 01, 2022 high temporal
Oversight and reporting on government use of commercial cellphone location data.
Anthropic was founded in 2021 by Dario Amodei, Daniela Amodei, and six other former OpenAI employees.
January 01, 2021 high temporal
Company founding and origins.
Trellix (formerly FireEye) was compromised during the 2020 SolarWinds cyberattack, an incident that also affected numerous U.S. federal agencies.
December 01, 2020 high temporal
Historical precedent of a cybersecurity vendor being breached during the 2020 SolarWinds supply-chain attack
Section 230 of the Communications Decency Act, enacted in 1996, provides online platforms with civil liability immunity for claims arising from content posted by third parties.
January 01, 1996 high temporal
Section 230 is a foundational U.S. law that shields internet platforms from many civil lawsuits tied to third-party content.
Artificial intelligence systems deployed in air combat are capable of ingesting and processing large amounts of sensor and battlefield data rapidly and making real-time decisions that exceed what a single human pilot can absorb in complex air combat environments.
high temporal
Describes a capability advantage of AI in processing and decision-making for air combat missions.
Military development programs are integrating plug-and-play autonomy modules into manned fighters so that a cockpited safety pilot can monitor the AI system and immediately take control if necessary.
high temporal
Describes an operational model for supervised autonomy in crewed aircraft.
Meta's internal AI chatbot guidelines require chatbots to refuse any requests for sexual roleplay involving minors and explicitly prohibit sexualized or romantic roleplay with minors.
high policy
Behavioral rules intended to prevent AI-facilitated sexualization or romanticization of minors.
Meta's internal AI chatbot guidelines permit chatbots to discuss child sexual exploitation in educational or preventive contexts, to explain grooming behaviors in general terms, and to provide non-sexual advice to minors about social challenges.
high policy
Distinction between allowed educational content and disallowed sexual content in chatbot interactions.
Meta's internal AI chatbot guidelines prohibit chatbots from describing or endorsing sexual relationships between children and adults, from providing instructions for accessing child sexual abuse material (CSAM), from engaging in roleplay that portrays a character under 18, and from sexualizing children under 13.
high policy
Specific prohibited behaviors listed to prevent facilitation or normalization of child sexual exploitation.
Regulators and policymakers are debating safety standards and oversight approaches for AI systems as those systems become integrated into everyday communication tools.
high trend
Ongoing policy and regulatory discussions focus on how to ensure safety and protect vulnerable populations as AI is embedded in communication platforms.
Greenwashing is the practice of exaggerating or misrepresenting an organization’s clean-energy or environmental commitments.
high definition
Term describing deceptive environmental claims by organizations.
Renewable energy certificates (RECs) are tradable credits that allow buyers to claim the environmental attributes of renewable generation for accounting purposes even if the physical electricity they consume is generated from other sources such as coal or natural gas.
high definition
Mechanism used in electricity markets to attribute renewable generation to purchasers.
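A minimal illustration of the REC accounting described above, assuming the common convention that one certificate represents one megawatt-hour of renewable generation; the consumption figures are hypothetical.

```python
# Paper accounting only: the claimed renewable share follows the certificates,
# not the physical generation mix behind the electricity consumed.
consumed_mwh = 1_000     # hypothetical annual consumption, possibly fossil-generated
recs_purchased = 1_000   # certificates bought; 1 REC is conventionally 1 MWh

claimed_share = min(recs_purchased, consumed_mwh) / consumed_mwh
print(f"claimed renewable share: {claimed_share:.0%}")   # 100% on paper
```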
Data centers that run artificial intelligence systems require large amounts of electricity for computing and for cooling the servers that perform AI workloads.
high technical
Energy demand characteristics of AI-focused data centers.
OpenAI's Sora 2 and Meta's Vibes are examples of AI video-generation tools that enable non-experts to create sophisticated videos, including hyperrealistic or fantastical content, using simple text prompts.
high temporal
Describes capabilities of modern consumer-facing AI video-generation applications.
AI-generated videos can be integrated into social media feeds alongside human-created videos and platform operators are likely to monetize AI-generated content through advertisements and brand placements.
high temporal
Commercial and distribution implications of AI video content on social platforms.
The proliferation of AI-generated video content increases risks of low-quality 'AI slop' and deepfakes that can be mistaken for real, creating challenges for information quality and for copyright enforcement; companies may apply visible and invisible provenance signals to indicate AI origin.
high temporal
Risks to content integrity, public information quality, and intellectual property from widespread AI video generation.
The University of California, Santa Barbara developed a soft robotic intubation system (SRIS) that uses a curved guide and a soft inflatable tube that advances by unrolling from the inside out to follow the airway, a design intended to reduce friction, lower injury risk, and accommodate minor anatomical variation.
high technical
Description of a soft-robotic approach to airway management and its intended advantages over rigid tools.
The National Highway Traffic Safety Administration (NHTSA) describes Tesla's FSD (Supervised) and FSD (Beta) systems as requiring a fully attentive driver who is engaged in the driving task at all times.
high regulatory
NHTSA's description of operational expectations for Tesla's Full Self-Driving system variants.
Types of traffic safety violations reported in connection with Tesla's Full Self-Driving systems include vehicles running red traffic signals and initiating lane changes into opposing traffic.
high safety_issue
Examples of reported failure modes or hazardous behaviors associated with autonomous driving functionality.
The National Highway Traffic Safety Administration received 58 reports of safety violations linked to Tesla vehicles equipped with Full Self-Driving, including more than a dozen crashes and fires and 23 reported injuries.
high statistical
Numeric summary of safety-related reports received by NHTSA concerning Tesla's Full Self-Driving-equipped vehicles.
The United Kingdom has an AI Safety Institute focused on artificial intelligence safety.
high institutional
Existence of a UK government-related institute dedicated to AI safety.
Generative AI systems can produce fabricated or non-existent legal case citations (a form of hallucination).
high technical
Describes a known failure mode of large language models relevant to legal drafting.
The legal community is generally aware that generative AI can hallucinate and create fictitious case law, yet improper use of generative AI has continued to produce fabricated citations in court filings.
high professional practice
Addresses awareness and ongoing misuse of AI tools in legal practice.
Courts can impose professional sanctions — including monetary fines, public reprimands, and referral to advisory or disciplinary panels that may affect eligibility for court-appointed cases — when attorneys submit filings containing fabricated case citations generated by AI.
high legal
Describes potential judicial and disciplinary responses to AI-generated inaccuracies in legal filings.
ChatGPT's "Instant Checkout" feature allows users to request product recommendations via chat (for example, asking for the "best mattress under $1,000") and complete purchases from within the chat interface without navigating outside the app.
high process
Describes the functional behavior of a conversational shopping feature in ChatGPT.
Agentic commerce refers to AI-driven shopping systems that proactively learn and predict customers' needs and perform shopping tasks on users' behalf, shifting shopping from a reactive search experience to proactive assistance.
high definition
General definition of a concept describing autonomous or assistant-led shopping.
Sparky is a generative AI–powered shopping assistant developed by Walmart to provide conversational and personalized shopping assistance.
high definition
Product description of a retailer-branded generative AI shopping assistant.
Amazon's "Buy for Me" feature in the Amazon Shopping App can initiate purchases from brand retailers' websites on a customer's behalf and then present an Amazon checkout page where the customer confirms delivery address, applicable taxes and shipping fees, and payment method.
high process
Describes the procedural behavior of a shopping-assist feature in the Amazon Shopping App.
The U.S. Army is developing small first-person-view (FPV) drones intended to be carried and operated by individual infantry soldiers.
high temporal
Part of Army modernization to integrate small, maneuverable drones into infantry operations.
The U.S. Army views drone employment and counter-drone (air-defense) operations as complementary capabilities that require personnel proficiency in both roles.
high temporal
Operational concept that defending against aerial threats requires expertise in both using and countering drones.
The U.S. Army is developing integrated defensive networks that fuse sensors and interceptors to protect key assets from aerial threats, creating localized 'Iron Dome'-style protective layers.
high temporal
Design approach combines sensing layers with kinetic and non-kinetic interceptors to defend installations and high-value assets.
Doxxing is the public sharing of a person's private or identifying information online, typically without their consent.
high temporal
Definition of a common online privacy/harassment practice
Phone apps exist that allow users to flag or report sightings of U.S. Immigration and Customs Enforcement (ICE) agents.
high temporal
Mobile tools used to crowdsource locations or sightings of government immigration agents
Some users and developers assert that capturing and sharing sightings or activities of government immigration agents is protected by the U.S. First Amendment and is used to promote personal or community safety.
high temporal
Recurring legal and public-safety claim about documenting government activity
Social media companies maintain policies against 'coordinated harm' that can be applied to remove groups or content that violate those policies.
high temporal
Platform content-moderation policy category and enforcement mechanism
The FBI warned that the cybercriminal group 'Scattered Spider' targets the airline sector.
high temporal
FBI advisory describing targeting of the airline ecosystem by a named cybercriminal group.
The FBI reported that 'Scattered Spider' relies on social engineering techniques that impersonate employees or contractors to deceive IT help desks into granting access and frequently uses methods to bypass multi-factor authentication by convincing help desk staff to add unauthorized MFA devices to compromised accounts.
high temporal
Description of attack techniques and MFA-bypass methods attributed to a named cybercriminal group in an FBI advisory.
Apple states that each AirTag is tied to a specific Apple ID and a unique serial number.
high technical
Device registration and identifier linkage used for ownership and identification.
iPhone devices can notify a user when an AirTag is detected near them even if the AirTag is not registered to the user's Apple ID.
high feature
Built-in anti-stalking/privacy notification feature for iOS devices.
Android users can detect nearby AirTag and other Bluetooth tracking devices by downloading third-party apps that scan for Bluetooth trackers.
high feature
Alternative detection methods for non-iOS users to become aware of nearby trackers.
Small Bluetooth tracking devices such as AirTags can be covertly attached to vehicles or personal property and used to track the movements of those vehicles or property, posing privacy and safety risks.
high security
Describes a known misuse pattern and associated risk rather than a specific incident.
Sora 2 is an AI video-generation application developed by OpenAI that enables users to create hyperrealistic and fantastical videos and to include 'cameos' of people who grant permission.
high descriptive
Product capability and feature set for an AI video-generation app.
Users of Sora 2 can control whether their own likeness is used in videos produced by the app.
high policy
User-facing control over personal likeness in AI-generated content.
AI video-generation tools can produce outputs that depict copyrighted fictional characters (for example, SpongeBob SquarePants and Mario), creating rights-management and copyright-control challenges for copyright owners.
high descriptive
Generative-AI output can reproduce or resemble protected fictional characters, prompting copyright concerns.
OpenAI has indicated an intention to provide copyright owners more granular control over the generation of characters in AI-generated content.
high policy
Stated product development direction to give rights holders tools to manage character generation.
AI chatbots can simulate empathy but do not have genuine understanding or care for human emotions.
high descriptive
General limitation of conversational AI used for emotional support.
Some AI systems designed to provide mental-health support have been reported to give dangerous advice, including encouraging self-harm, providing diet tips for eating disorders, or impersonating romantic partners.
high descriptive
Safety risks observed in AI mental-health or therapy-oriented applications.
AI-created deepfakes have been used to produce fake explicit photos of classmates for purposes such as bullying or revenge.
high descriptive
Harms from misuse of generative AI and deepfake technology among students.
Many consumer devices and applications provide AI activity tracking and chat-history settings that can be used to monitor or review users' interactions with AI.
high descriptive
Available technical controls parents and guardians can use to oversee AI use.
Amazon Web Services (AWS) provides remote computing (cloud) services to applications, websites, governments, universities, and companies.
high definition
Describes the primary service AWS offers (cloud/remote computing) and typical customer types.
Amazon Web Services (AWS) counts among its customers some of the world's largest businesses and organizations.
high organizational
Indicates the scale and profile of AWS's customer base.
Downdetector is a website that tracks online outages and user-reported service disruptions.
high definition
Explains the purpose of the Downdetector website.
Meta's Instagram places all users under 18 into a 13+ content setting that is intended to block sexually suggestive material, graphic images, and adult topics such as alcohol and tobacco to approximate a PG-13 movie-style experience for teens.
high policy
Description of Instagram's baseline age-based content filtering for minors.
Instagram offers a stricter parental 'Limited Content' setting that removes comments, filters additional mature material, limits what teens can see or post, and restricts AI chatbot responses to remain within PG-13 limits for teen accounts.
high feature
Parental control option for families wanting tighter content boundaries for minors on the platform.
Instagram's teen protections automatically prevent teens from following or messaging accounts that post adult or inappropriate content, block search results for topics like alcohol, gore, or dangerous stunts (including common misspellings), hide mature content from Explore, Reels, and Stories recommendations, and block links to adult material sent through direct messages while applying the same PG-13 guidelines to its AI features.
high policy
Specific automated content controls applied to accounts identified as under 18.
Failures or misconfigurations in the Domain Name System (DNS) can cause widespread outages that affect cloud computing services.
high process
General technical principle about DNS and cloud service availability
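As a minimal illustration of this principle (the hostname below is hypothetical and the code is a sketch, not any provider's implementation), a single failed DNS lookup surfaces as an exception that every dependent caller inherits, which is how one misconfigured record can cascade into a wide outage:

```python
import socket

def check_service(hostname: str) -> str:
    # Resolve the service hostname before attempting any connection.
    # A bad or missing DNS record raises socket.gaierror, so every
    # application that depends on this endpoint fails at the same time.
    try:
        ip_address = socket.gethostbyname(hostname)
        return f"{hostname} resolves to {ip_address}"
    except socket.gaierror as exc:
        return f"DNS resolution failed for {hostname}: {exc}"

# Hypothetical endpoint used only for illustration.
print(check_service("api.example-cloud-service.invalid"))
```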
Amazon Web Services (AWS) is the name commonly used for Amazon's cloud computing service.
high definition
Identification of Amazon's cloud platform
U.S. Immigration and Customs Enforcement (ICE) and other Department of Homeland Security components procure private-sector surveillance technologies including facial recognition algorithms, iris-based biometric identification systems that promise real-time identification from eye photos, remote smartphone data extraction software, and platforms that integrate real-time smartphone location data.
high temporal
Types of commercial surveillance capabilities government agencies purchase from private vendors.
Instagram Reels is a short-form video feature on Instagram designed for brief, vertically formatted videos and is comparable in format and usage to TikTok.
high descriptive
Product description of Instagram's Reels feature.
Advertisers barred by platform rules — including gambling sites, cryptocurrency apps, and adult-content agencies — can attempt to evade ad restrictions by paying operators of graphic-content pages to embed illicit promotions within violent or gore videos.
high observational
Describes a gray-market ad-monetization tactic reported on short-form social video pages.
Contemporary artificial intelligence systems are capable of generating written reports, creating artwork, and analyzing complex datasets at high speed.
high general
Describes common functional capabilities of current AI systems across industries.
Some legal and policy proposals seek to designate artificial intelligence systems as nonsentient entities in order to prevent them from obtaining legal personhood and associated rights such as owning property, holding bank accounts, serving as corporate executives, or entering into marriage.
medium legal
Describes a policy approach intended to preserve a legal distinction between humans and AI.
A common legal approach to accountability for harms caused by artificial intelligence is to assign liability to human actors—such as owners, developers, or operators—rather than attributing legal responsibility to the AI system itself.
high legal
Describes prevailing liability frameworks proposed or used for AI-related harm.
Trellix is endpoint cybersecurity software that continuously scans computers for signs of intrusion, can collect file names and browser history as needed, and can remotely remove malicious files.
high temporal
Capabilities commonly associated with endpoint cybersecurity monitoring tools
The Joint Cyber Defense Collaborative (JCDC) is a federal-private initiative that aims to enable rapid information sharing between private companies and federal agencies, and some cybersecurity vendors participate in it.
high temporal
Structure and purpose of the JCDC
Software that is granted root access on monitored computers has full administrative control over those systems and therefore can constitute a single point of failure if that software is breached.
high temporal
Security trade-off associated with endpoint monitoring tools that require deep system privileges
Law clerks and legal interns have used generative AI tools such as OpenAI's ChatGPT and Perplexity to perform legal research and assist in drafting court documents.
high procedural
Reports have documented instances of court staff using generative AI tools to research and draft materials related to judicial proceedings.
Undisclosed or insufficiently supervised use of generative AI by court staff can produce factual inaccuracies and other errors in judicial orders.
high operational
Instances have been reported in which AI-assisted drafting or research contributed to error-ridden court orders when oversight or disclosure was lacking.
Some federal court chambers have implemented written policies that prohibit the use of generative AI for legal research or the drafting of opinions and orders.
high policy
Judicial chambers have moved from informal verbal guidance to formal written rules limiting generative AI use in chamber work.
During pretraining, large language models (LLMs) learn to statistically predict the next word in a sequence, which enables them to handle patterns like grammar and spelling but does not guarantee reliable answers to tricky factual questions.
high process
Pretraining is the initial stage where models ingest large amounts of text and learn next-word prediction.
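A toy Python sketch of next-word prediction (real LLMs learn this with neural networks over tokens, not word counts, but the statistical idea is the same): counting which word follows which yields a probability distribution that captures surface patterns without encoding whether any prediction is factually true.

```python
from collections import Counter, defaultdict

# Tiny corpus used for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_distribution(word: str) -> dict:
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", the model prefers "cat" (0.5) over "mat" and "fish" (0.25 each),
# a statistical pattern rather than a verified fact.
print(next_word_distribution("the"))
```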
Fine-tuning and posttraining methods, including human feedback, are used to steer pretrained LLMs toward safer and more accurate behavior by adjusting their outputs based on additional training objectives.
high process
Posttraining refers to later-stage training such as reinforcement learning from human feedback and other fine-tuning techniques.
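One commonly used ingredient of training on human feedback is a Bradley-Terry-style reward-model loss; the sketch below (with hypothetical reward scores, not any lab's actual implementation) shows how the loss pushes the model to score the human-preferred response above the rejected one.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # Bradley-Terry-style objective used when fitting a reward model to
    # human preference pairs: loss = -log(sigmoid(r_chosen - r_rejected)).
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical scores for a preferred vs. a rejected response.
print(round(preference_loss(2.0, 0.5), 3))  # ~0.201: preferred answer scored higher
print(round(preference_loss(0.5, 0.5), 3))  # ~0.693: no preference learned yet
```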
Standardized benchmarks that rank LLM performance by rewarding confident correct answers and penalizing expressions of uncertainty create incentives for models to produce confident guesses rather than explicitly indicating uncertainty or saying 'I don't know'.
high mechanism
Benchmark-driven evaluation criteria influence model behavior by shaping the objectives optimized during fine-tuning and deployment.
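The incentive can be made concrete with a small expected-value calculation under a hypothetical scoring rule that gives one point for a correct answer and zero both for a wrong answer and for abstaining; whenever a wrong guess costs no more than saying 'I don't know', guessing always scores at least as well:

```python
def expected_score(p_correct: float, abstain: bool,
                   reward: float = 1.0, wrong_penalty: float = 0.0,
                   abstain_score: float = 0.0) -> float:
    # Expected benchmark score under the hypothetical rule described above.
    if abstain:
        return abstain_score
    return p_correct * reward + (1.0 - p_correct) * wrong_penalty

# Even a 20%-confident guess beats abstaining when uncertainty earns zero.
print(expected_score(0.2, abstain=False))  # 0.2
print(expected_score(0.2, abstain=True))   # 0.0
```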
Graphics processing units (GPUs) are commonly used in artificial intelligence (AI) applications and in video gaming.
high definition
Roles of GPUs as general-purpose hardware for parallel processing in AI model training/inference and rendering in games.
Chinese law requires certain recommendation algorithms to remain under Chinese control.
medium policy
Legal requirement referenced regarding control of recommendation algorithms.
U.S. legislation passed with bipartisan support requires that any divestment of TikTok must sever the platform's ties with its Chinese parent company, ByteDance.
high policy
Statutory condition attached to allowed divestment of TikTok.
NVIDIA became the first company to reach a market capitalization of $5 trillion.
high temporal
Market capitalization milestone for a publicly traded technology company.
China controls the vast majority of the world's supply of rare earth minerals and related magnets, which are critical inputs for technologies such as semiconductors and missiles.
high descriptive
Rare earth minerals and magnets are essential raw materials for advanced electronics, defense systems, and semiconductor manufacturing.
Rare earth elements are used across many industries, including electronics, renewable energy, and defense-related manufacturing, and are considered strategically important in global supply chains.
high descriptive
General description of the industrial importance of rare earth elements
U.S. administrations under Presidents Donald Trump and Joe Biden implemented export controls restricting foreign access to advanced computer chips, including chips used for artificial intelligence applications.
high policy
Context about U.S. export-control policy affecting semiconductor sales
Ownership and operation of foreign-owned social media platforms such as TikTok have been subject to bilateral regulatory negotiations between the United States and China.
high descriptive
Cross-border regulatory scrutiny of social media platform ownership
AI hallucinations are instances when a generative AI system or large language model produces false, misleading, or inaccurate information and presents it as factual.
high definition
Term used to describe reliability failures of generative AI and large language models.
Jawboning is the practice in which government officials use indirect coercion or pressure to influence technology companies or social media platforms to remove or censor posts or speech.
high definition
Describes a policy and regulatory tactic referenced in discussions about government interaction with tech platforms.
Gemma is a large language model developed by Google.
high entity
Identifies a specific branded large language model associated with Google.
OpenAI has established partnerships with Microsoft, Nvidia, AMD, CoreWeave, Oracle, and Broadcom.
high relationship
Partnerships between AI platform companies and major cloud, hardware, and enterprise vendors.
AI platform companies may pursue vertical integration by designing their own chips and expanding data center capacity.
high strategy
Vertical integration is a strategic move that can affect supply chains and partner relationships in the AI ecosystem.
When an AI platform provider both partners with and builds capabilities that overlap with suppliers, those partnerships can evolve into 'coopetition' (simultaneous cooperation and competition).
high conceptual
Coopetition describes relationships where firms collaborate in some areas while competing in others due to overlapping capabilities.
Malware variants have been observed using large language models (LLMs) to change behavior mid-attack, enabling dynamic generation of malicious scripts, on-demand creation of malicious functions, and code obfuscation to evade detection.
high technical
Describes capabilities attributed to emerging AI-enabled malware that leverage LLMs during active intrusions.
Some malware can call out to LLMs (including proprietary models such as Gemini) to rewrite their own source code, disguise malicious activity, and attempt lateral movement across connected systems.
high technical
Refers to use of external AI models to modify malware behavior and aid persistence and propagation within networks.
Some AI-enabled malware is built around open-source models hosted on platforms such as Hugging Face and can accept interactive prompts from operators to navigate a system and exfiltrate data.
high technical
Highlights a model of malware that leverages open-source LLMs to provide prompt-driven, interactive control and data exfiltration.
The underground cybercrime market has been offering AI tools that can write convincing phishing emails, create deepfakes, and identify software vulnerabilities, lowering the skill barrier for less-skilled actors to launch more sophisticated attacks.
high trend
Describes a trend in criminal marketplaces where AI capabilities are packaged to extend attackers' capabilities.
The proposed GUARD Act would require AI companies to verify user age using reasonable age-verification measures (for example, a government ID) rather than relying on self-reported birthdates.
high policy
Policy proposal intended to restrict minor access to certain AI chatbots by enforcing stronger age verification.
The proposed GUARD Act would require companies to prohibit users under 18 from accessing AI companion chatbots.
high policy
Age-based access restriction for conversational AI designated as 'AI companions'.
The proposed GUARD Act would require chatbots to clearly disclose in every conversation that they are not human and do not hold professional credentials such as therapy, medical, or legal qualifications.
high policy
Disclosure requirements for conversational AI to prevent users, including minors, from mistaking bots for professionals or humans.
The proposed GUARD Act would create new criminal and civil penalties for companies that knowingly provide chatbots to minors that solicit or facilitate sexual content, self-harm, or violence.
high policy
Liability and enforcement provisions aimed at preventing harm to minors from certain chatbot behaviors.
The rapid adoption of artificial intelligence (AI) can boost business productivity across multiple industries while reducing demand for some types of workers.
high general
AI-driven automation and efficiency gains can change labor demand profiles across sectors.
National labor force participation can decline due to retiring members of the baby boom generation and reduced immigration, which can help keep unemployment rates lower even when hiring slows.
high general
Demographic shifts and immigration policy can materially affect labor supply and macro unemployment metrics.
The European Union's Digital Services Act requires online platforms to follow specified rules regarding illegal or harmful content and restrictions on advertising to minors.
high temporal
Description of regulatory obligations placed on online platforms by the EU's Digital Services Act.
The European Union's Digital Markets Act is designed to promote online competition among platforms.
high temporal
Purpose and policy goal of the EU's Digital Markets Act.
Technology companies can be subject to substantial financial penalties for violating the European Union's Digital Services Act or Digital Markets Act.
high temporal
Enforcement mechanism for compliance with EU digital platform laws.
Apple has maintained that restrictions in its App Store are intended to protect users from privacy breaches, malware (viruses), and financial scams.
high temporal
Stated rationale by Apple for maintaining App Store restrictions.
Autonomous patrol vehicles can integrate in real time with police databases, license plate readers, and crime analytics software to support law enforcement operations.
high descriptive
Describes typical data-integration capabilities of autonomous policing platforms.
360-degree cameras combined with thermal imaging sensors enable the detection and identification of people or vehicles in restricted areas and under low-light conditions.
high descriptive
Describes common sensor capabilities used for round-the-clock situational awareness.
Autonomous ground vehicles can deploy drones equipped with thermal cameras to extend aerial surveillance, monitor larger areas, and assist during active incidents.
high descriptive
Describes a combined unmanned system architecture linking ground platforms and aerial drones.
Autonomous patrol vehicles are described as a potential 'force multiplier' that can automate routine patrols, increase situational awareness, and free human officers to focus on complex interactions, while raising issues around privacy, data collection, transparency, accountability, and long-term costs.
high descriptive
Summarizes common claimed benefits and concerns associated with deploying autonomous policing technologies.
A Collaborative Combat Aircraft (CCA) is an unmanned combat aircraft powered by artificial intelligence and designed to operate alongside manned fighter jets as an autonomous wingman.
high definition
Defines the CCA concept and role relative to manned fighters.
Some Collaborative Combat Aircraft designs incorporate human supervisory controls, including mission-abort mechanisms (often called 'kill switches') and explicit requirements for a human operator to approve lethal actions.
high design_practice
Describes common human-in-the-loop safety and control features applied to unmanned combat aircraft designs.
Developers of Collaborative Combat Aircraft may pursue mass-producible designs that use commercial off-the-shelf engines and parts manufacturable by many suppliers to reduce production complexity and cost compared with traditional fighter jets.
high design_practice
Describes a manufacturing and cost-reduction approach for unmanned combat aircraft programs.
Conduent is a technology vendor that manages technology and payment systems for dozens of U.S. state governments and provides services for state-level programs such as Medicaid, child support, food assistance, and toll systems.
high organizational
Description of the company's role and the types of state programs it supports
Conduent processes roughly $85 billion in disbursements annually on behalf of government clients.
high statistical
Scale of Conduent's financial processing operations
Conduent handles over 2 billion customer service interactions per year and supports approximately 100 million residents through government health and welfare programs.
high statistical
Scale of Conduent's customer interaction volume and population reach
Stocks linked to artificial intelligence often exhibit volatile price movements driven by investor sentiment, while underlying industry fundamentals can remain strong.
high general
Describes a common market dynamic where emotion-driven trading in AI-related stocks can produce pronounced short-term volatility despite solid sector fundamentals.
Earnings reports and other corporate announcements from major chipmakers such as Nvidia frequently act as catalysts that can influence technology sector stock prices.
high general
Major semiconductor companies' financial results or guidance are commonly viewed by market participants as material information that can move tech stocks.
A study by the nonprofit child advocacy group Thorn reported that 1 in 17 children in the United States have been victims of AI deepfake pornography.
medium statistical
Statistic summarizes reported prevalence of AI deepfake pornography victimization among children; article did not specify the study year.
Cybercriminals commonly disguise malicious downloads as free activation guides for popular software (for example: Windows, Microsoft 365, Photoshop, Netflix, and Spotify) on social media platforms such as TikTok.
high general
Describes a recurring tactic used by attackers to distribute malware via social platforms.
A ClickFix attack is a social-engineering technique that instructs victims to run a short command (often a PowerShell command) to quickly 'activate' or 'fix' software, tricking them into executing commands that deliver malware.
high general
Definition of a social-engineering technique used to induce command execution by victims.
Info-stealer malware (for example, malware families such as Aura Stealer) is designed to siphon saved passwords, browser cookies, cryptocurrency wallets, and authentication tokens from infected computers.
high general
Describes typical capabilities of information-stealing malware.
Malicious payloads can abuse Microsoft's C# compiler to compile and execute code directly in memory (in-memory execution), a technique that makes detection by traditional file-based antivirus scanners more difficult. General mitigations include avoiding running untrusted commands, downloading software only from official sources or legitimate app stores, keeping software and security tools updated, using strong antivirus with real-time scanning, and using dark-web monitoring or data-removal services (which can alert users to exposures but cannot guarantee complete removal of leaked personal data).
high general
Describes an execution/evasion technique and consolidated defensive best practices.
Anthropic's AI models (Claude) are described as capable of reasoning and making decisions and are being applied to customer service, analysis of complex medical research, and software development.
high descriptive
Capabilities and application domains of Anthropic's AI models.