It’s almost impossible to dislike LG’s Dual Inverter window air conditioner (or if you want to call it by its official name, the LW1019IVSM), especially with temperatures topping 95 degrees in New York City this summer. It cools down my bedroom incredibly fast, and it doesn’t get egregiously loud. It’s smart too. Before entering my apartment building, I remotely turn it on so I can walk straight into some semblance of bliss.
OK, fine. I did have to go to the hardware store to install it, partly because my windows are weird. But once I got the heavy thing mounted, it blasted my woes away with icy gusts.
The Ice Man Cometh
It took me two days to install this AC unit. It’s my fault that I decided to mount it at 11 pm, but it’s not my fault that I realized soon after that I wouldn’t succeed without some equipment. The specifications call for double-hung windows of a certain dimension, which I have. But the center of the sill is like a deep canyon. It’s not flush with the inner sill, which meant there was no easy way to mount the included L brackets to keep it secure.
The fact that the instruction booklet has an included solution suggests that this is not an uncommon problem. So the next morning, I grabbed some plywood from a nearby Home Depot, sawed it to the right shape, and plopped it in to create an even surface. I screwed the L brackets onto the plywood and lifted the 63-pound thing into place with the help of my partner.
The rest was relatively easy. The manual is fairly detailed and easy to follow; just be prepared for possible window shenanigans. If you find yourself at the local hardware store, you can also pick up a metal bracket instead. The AC will sit much more securely (it also might be a requirement in cities like New York), and you won’t have to drill any holes.
LG includes a good amount of foam and padding to seal any gaps around the AC unit, though don’t expect the same level of sound blockage as when your window was closed. Noise will leak in (thanks, New York). It doesn’t beat the soundproofing I experienced with Midea’s U-Shaped air conditioner, where the window goes down much further, creating a smaller gap.
What exactly does “Dual Inverter” mean? A regular AC uses a single, motor-driven compressor to remove hot air from a room, cool air in a chamber, and pass heat outside. Usually, the AC senses when the room is too hot and the compressor ratchets up to the max level, then shuts down when the room hits the right temperature.
However, the LG AC has twin compressors and chambers, which allow it to tweak its output. If a room needs only a little cooling, the compressor can run at partial power instead of ratcheting all the way up. That makes the unit faster, quieter, and more energy-efficient. According to Energy Star, this unit uses 27 percent less energy than the US federal standard, and it successfully maintained temps once it cooled the room.
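The difference between the two approaches can be sketched with a toy control-loop simulation. This is purely illustrative: the numbers and the simple room model are assumptions, not LG specifications. The fixed-speed compressor is modeled as bang-bang control (full blast or off), while the inverter modulates its output in proportion to how far the room is from the target temperature.

```python
# Toy simulation contrasting a fixed-speed compressor (bang-bang control)
# with an inverter compressor that modulates output to match the load.
# All constants are illustrative assumptions, not LG specifications.

TARGET = 73.0    # desired room temperature (F)
OUTDOOR = 94.0   # outdoor temperature (F)
LEAK = 0.02      # fraction of the indoor/outdoor gap that leaks in per step
MAX_COOL = 1.0   # maximum cooling per step (F)

def simulate(controller, steps=500, start=84.0):
    """Run a simple Euler simulation; return final temp and on/off toggles."""
    temp = start
    toggles, last_on = 0, False
    for _ in range(steps):
        power = controller(temp)  # compressor output, 0.0 .. 1.0
        on = power > 0
        if on != last_on:
            toggles += 1
            last_on = on
        temp += LEAK * (OUTDOOR - temp) - MAX_COOL * power
    return temp, toggles

def fixed_speed(temp):
    # Single-speed compressor: full power above target, off below it.
    return 1.0 if temp > TARGET else 0.0

def inverter(temp):
    # Inverter compressor: output proportional to distance from target.
    return max(0.0, min(1.0, 0.3 * (temp - TARGET)))

t_fixed, toggles_fixed = simulate(fixed_speed)
t_inv, toggles_inv = simulate(inverter)
```

Under this model the fixed-speed unit cycles on and off repeatedly around the target, while the inverter ramps down and holds a steady partial output — the same behavior that makes the real unit quieter and more efficient at maintaining a set temperature.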
What you probably want to know most is how well it cools, to save you from the next heat wave. The answer is very, very well. On a particularly sweltering 94-degree day, this LG unit brought the indoor temperature from around 84 degrees to a comfortable 73 within an hour.
I do have to note that LG sent me the 9,500-Btu (British thermal units) model, which is rated to cool a room of around 400 to 450 square feet. Unfortunately, I was only able to test it in a room about half that size—though when I left my door open, it cooled my hallway and bathroom reasonably well too. (You can see what Btu rating you need based on your room size here.)
The fan is strong. You can choose between three fan strengths, and I usually left it at the lowest setting. On the highest, I could feel the icy wind on my face more than 12 feet away. You can also adjust the louvers up, instead of just side to side, to direct air away from your face. That’s helpful, especially if you end up spending a lot of time in front of the AC because of your room configuration.
Thankfully, it doesn’t get loud. My bedroom normally hovers around 42 decibels, which is a little quieter than a library. With the LG AC on, it revs up to 54 decibels, which is about as loud as a conversation at home. This isn’t as quiet as the aforementioned Midea AC, but it’s significantly quieter than most other air conditioners I’ve tried, all of which blared around 70 decibels.
That said, I would avoid using the Energy Saver mode at night. It cools the room down to the desired temperature and then shuts off. It’ll come back to life every so often to cool the room down again, but the sound it makes when it does this has woken up my partner and me a few times. It sounds like the propeller of an airplane starting to spin, or an old car failing to start. Not fun at 4 am. Instead, we relied on the timer and set it to turn off a few hours into our sleep.
Hell Freezes Over
The only thing you’ll need to do regularly to maintain this AC is clean the filter. The light will turn on automatically after 250 hours of operation (around 10 days of continuous use). You can just pop it out, wash it, let it dry, and put it back in. Just remember to press and hold the up and down temperature buttons to turn off the light, because the AC doesn’t automatically know if you’ve cleaned it.
You can use the included remote or LG’s app to control it, but I preferred using my voice with my Google Assistant speakers (or through the Google Home app). It supports Amazon’s Alexa, but not HomeKit. Over the three months I’ve been testing this AC, it never once had a funky connection with the app or needed me to reconnect it to Google, which is usually standard practice with smart home devices. That means I’ve reliably been able to turn it on before I get home, so I can enter an icebox.
Look, I know air conditioners aren’t eco-friendly, and if you’d prefer to skip it, you can check out our guide on how to stay cool without AC. But as the Earth continues to heat up, they can help us feel more comfortable during the hottest months, and in some places, cooling devices save lives. Dual inverter tech does make this unit more efficient compared to other ACs with similar cooling capacities. That makes me feel a little better when I stand in front of it, after wading through New York’s heavy summer humidity.
There’s bipartisan agreement in Washington that the US government should do more to support development of artificial intelligence technology. The Trump administration redirected research funding towards AI programs; President Biden’s science advisor Eric Lander said of AI last month that “America’s economic prosperity hinges on foundational investments in our technological leadership.”
At the same time, parts of the US government are working to place limits on algorithms to prevent discrimination, injustice, or waste. The White House, lawmakers from both parties, and federal agencies including the Department of Defense and the National Institute of Standards and Technology are all working on bills or projects to constrain potential downsides of AI.
Biden’s Office of Science and Technology Policy is working on addressing the risks of discrimination caused by algorithms. The National Defense Authorization Act passed in January introduced new support for AI projects, including a new White House office to coordinate AI research, but also required the Pentagon to assess the ethical dimensions of AI technology it acquires, and NIST to develop standards to keep the technology in check.
In the past three weeks, the Government Accountability Office, which audits US government spending and management and is known as Congress’s watchdog, released two reports warning that federal law enforcement agencies aren’t properly monitoring the use and potential errors of algorithms used in criminal investigations. One took aim at face recognition, the other at forensic algorithms for face, fingerprint, and DNA analysis; both were prompted by lawmaker requests to examine potential problems with the technology. A third GAO report laid out guidelines for responsible use of AI in government projects.
Helen Toner, director of strategy at Georgetown’s Center for Security and Emerging Technology, says the bustle of AI activity provides a case study of what happens when Washington wakes up to new technology.
In the mid-2010s, lawmakers didn’t pay much notice as researchers and tech companies brought about a rapid increase in the capabilities and use of AI, from conquering champs at Go to ushering smart speakers into kitchens and bedrooms. The technology became a mascot for US innovation, and a talking point for some tech-centric lawmakers. Now the conversations have become more balanced and business-like, Toner says. “As this technology is being used in the real world you get problems that you need policy and government responses to.”
Face recognition, the subject of GAO’s first AI report of the summer, has drawn special focus from lawmakers and federal bureaucrats. Nearly two dozen US cities have banned local government use of the technology, usually citing concerns about accuracy, which studies have shown is often worse on people with darker skin.
The GAO’s report on the technology was requested by six Democratic representatives and senators, including the chairs of the House oversight and judiciary committees. It found that 20 federal agencies that employ law enforcement officers use the technology, with some using it to identify people suspected of crimes during the January 6 assault on the US Capitol, or the protests after the killing of George Floyd by Minneapolis police in 2020.
Fourteen agencies sourced their face recognition technology from outside the federal government—but 13 did not track what systems their employees used. The GAO advised agencies to keep closer tabs on face recognition systems to avoid the potential for discrimination or privacy invasion.
The GAO report appears to have increased the chances of bipartisan legislation constraining government use of face recognition. At a hearing of the House Judiciary Subcommittee on Crime, Terrorism, and Homeland Security held Tuesday to chew over the GAO report, Representative Sheila Jackson Lee (D-Texas), the subcommittee chair, said that she believed it underscored the need for regulations. The technology is currently unconstrained by federal legislation. Ranking member Representative Andy Biggs (R-Arizona) agreed. “I have enormous concerns, the technology is problematic and inconsistent,” he said. “If we’re talking about finding some kind of meaningful regulation and oversight of facial recognition technology then I think we can find a lot of common ground.”
GAO dug deeper on law enforcement technology in its report on forensic algorithms, requested by bipartisan members of the House, including the chairs of the oversight and science, space, and technology committees. The agency said that algorithms for face recognition, latent fingerprint analysis, and DNA profiling from degraded or mixed samples can help investigators. But the report also suggested lawmakers support new standards on training and appropriate use of such algorithms to avoid errors and increase transparency in criminal justice.
Representative Mark Takano (D-California), among the lawmakers who requested the report, says it provided a window into the fallibility of forensic algorithms. “Everything from the data input, to the design of the algorithm, to the testing of it can lead to disparate outcomes for people in the real world,” he says. In April, Takano reintroduced a bill drafted in 2019 that would direct NIST to establish standards and a testing program for forensic algorithms, and prohibit use of trade secrets to prevent criminal defense teams from accessing source code of algorithms used to process evidence.
The GAO’s third AI-related report, specifying guidelines on responsible use of AI for federal agencies, was initiated by the agency itself in anticipation of rapid growth in government AI projects.
GAO chief data scientist Taka Ariga says the report aims to explain to government agencies and AI suppliers in the private sector the acceptable standards for the testing, security, and privacy of AI systems and data used to create them. Future audits of government AI projects will draw on the document’s criteria, he says. “We want to make sure we’re asking the accountability questions now because our job is going to get more difficult when we encounter AI systems that are more capable,” Ariga says.
Despite the recent efforts of lawmakers and officials like Ariga, some policy experts say the US agencies and Congress still need to invest more in adapting to the age of AI.
In a recent report, Georgetown’s CSET outlined scary but plausible “AI accidents” to encourage lawmakers to work more urgently on AI safety research and standards. Its hypothetical disasters included a skin cancer app misdiagnosing Black people at higher rates, leading to unnecessary deaths, or mapping apps steering drivers into the path of wildfires.
The Brookings Institution’s director of governance studies, Darrell West, recently called for the revival of the Office of Technology Assessment, shut down 25 years ago, to provide lawmakers with independent research on new technologies such as AI.
Members of Congress from both parties have attempted to bring back the OTA in recent years. They include Takano, who says it could help Congress be more proactive in tackling challenges raised by algorithms. “We need OTA or something like it to help members anticipate where technology is going to challenge democratic institutions, or the justice system, or political stability,” he says.
The Russian state hackers who orchestrated the SolarWinds supply chain attack last year exploited an iOS zero-day as part of a separate malicious email campaign aimed at stealing Web authentication credentials from Western European governments, according to Google and Microsoft.
In a post Google published on Wednesday, researchers Maddie Stone and Clement Lecigne said a “likely Russian government-backed actor” exploited the then unknown vulnerability by sending messages to government officials over LinkedIn.
Moscow, Western Europe, and USAID
Attacks targeting CVE-2021-1879, as the zero-day is tracked, redirected users to domains that installed malicious payloads on fully updated iPhones. The attacks coincided with a campaign by the same hackers who delivered malware to Windows users, the researchers said.
The campaign closely tracks to one Microsoft disclosed in May. In that instance, Microsoft said that Nobelium—the name Microsoft uses to identify the hackers behind the SolarWinds supply chain attack—first managed to compromise an account belonging to USAID, a US government agency that administers civilian foreign aid and development assistance. With control of the agency’s account with the online marketing company Constant Contact, the hackers had the ability to send emails that appeared to use addresses known to belong to the US agency.
The federal government has attributed last year’s supply chain attack to hackers working for Russia’s Foreign Intelligence Service (abbreviated as SVR). For more than a decade, the SVR has conducted malware campaigns targeting governments, political think tanks, and other organizations in countries including Germany, Uzbekistan, South Korea, and the US. Targets have included the US State Department and the White House in 2014. Other names used to identify the group include APT29, the Dukes, and Cozy Bear.
In an email, the head of Google’s Threat Analysis Group, Shane Huntley, confirmed the connection between the attacks involving USAID and the iOS zero-day, which resided in the WebKit browser engine.
“These are two different campaigns, but based on our visibility, we consider the actors behind the WebKit 0-day and the USAID campaign to be the same group of actors,” Huntley wrote. “It is important to note that everyone draws actor boundaries differently. In this particular case, we are aligned with the US and UK government’s assessment of APT 29.”
Forget the Sandbox
Throughout the campaign, Microsoft said, Nobelium experimented with multiple attack variations. In one wave, a Nobelium-controlled web server profiled devices that visited it to determine what OS and hardware the devices ran on. In the event the targeted device was an iPhone or iPad, a server delivered an exploit for CVE-2021-1879, which allowed hackers to deliver a universal cross-site scripting attack. Apple patched the zero-day in late March.
In Wednesday’s post, Stone and Lecigne wrote:
After several validation checks to ensure the device being exploited was a real device, the final payload would be served to exploit CVE-2021-1879. This exploit would turn off Same-Origin-Policy protections in order to collect authentication cookies from several popular websites, including Google, Microsoft, LinkedIn, Facebook, and Yahoo and send them via WebSocket to an attacker-controlled IP. The victim would need to have a session open on these websites from Safari for cookies to be successfully exfiltrated. There was no sandbox escape or implant delivered via this exploit. The exploit targeted iOS versions 12.4 through 13.7. This type of attack, described by Amy Burnett in Forget the Sandbox Escape: Abusing Browsers From Code Execution, is mitigated in browsers with Site Isolation enabled, such as Chrome or Firefox.
It’s Raining Zero-Days
The iOS attacks are part of a recent explosion in the use of zero-days. In the first half of this year, Google’s Project Zero vulnerability-research group has recorded 33 zero-day exploits used in attacks—11 more than the total number from 2020. The growth has several causes, including better detection by defenders and better software defenses that, in turn, require multiple exploits to break through.
The other big driver is the increased supply of zero-days from private companies selling exploits.
“Zero-day capabilities used to be only the tools of select nation-states who had the technical expertise to find 0-day vulnerabilities, develop them into exploits, and then strategically operationalize their use,” the Google researchers wrote. “In the mid-to-late 2010s, more private companies have joined the marketplace selling these 0-day capabilities. No longer do groups need to have the technical expertise, now they just need resources.”
The iOS vulnerability was one of four in-the-wild zero-days Google detailed on Wednesday. The other three were:
The four exploits were used in three different campaigns. Based on their analysis, the researchers believe that three of the exploits were developed by the same commercial surveillance company, which sold them to two different government-backed actors. The researchers didn’t identify the surveillance company, the governments, or the specific three zero-days they were referring to.
Representatives from Apple didn’t immediately respond to a request for comment.
KiwiSDR is hardware that uses a software-defined radio to monitor transmissions in a local area and stream them over the Internet. A largely hobbyist base of users does all kinds of cool things with the playing-card-sized devices. For instance, a user in Manhattan could connect one to the Internet so that people in Madrid, Spain, or Sydney, Australia, could listen to AM radio broadcasts, CB radio conversations, or even watch lightning storms in Manhattan.
On Wednesday, users learned that for years, their devices had been equipped with a backdoor that allowed the KiwiSDR creator—and possibly others—to log in to the devices with administrative system rights. The remote admin could then make configuration changes and access data not just for the KiwiSDR but in many cases to the Raspberry Pi, BeagleBone Black, or other computing devices the SDR hardware is connected to.
A big trust problem
Signs of the backdoor in the KiwiSDR date back to at least 2017. The backdoor was recently removed under unclear circumstances, with no mention of the removal. But despite the removal, users remain rattled, since the devices run as root on whatever computing device they’re connected to and can often access other devices on the same network.
“It’s a big trust problem,” a user with the handle xssfox told me. “I was completely unaware that there was a backdoor, and it’s hugely disappointing to see the developer adding backdoors in and actively using them without consent.”
Xssfox said she runs two KiwiSDR devices, one on a BeagleBone Black that uses a custom FPGA to run the Pride Radio Group, which lets people listen to radio transmissions in and around Gladstone, Australia. A page of public broadcasts shows that roughly 600 other devices are also connected to the Internet.
She added: “In my case, the KiwiSDRs are hosted on a remote site that has other radio experiments running. They could have gained access to those. Other KiwiSDR users sometimes have them set up in remote locations using other people’s/companies’ networks, or on their home network. It’s sort of like the security camera backdoors/exploits, but smaller-scale [and] just amateur radio people.”
Software-defined radios use software—rather than the standard hardware found in traditional radio equipment—to process radio signals. The KiwiSDR attaches to an embedded computer, which in turn shares local signals with a much wider base of people.
The backdoor is simple enough. A few lines of code allow the developer to remotely access any device by entering its URL in a browser and appending a password to the end of the address. From there, the person using the backdoor can make configuration changes not only to the radio device but, by default, also to the underlying computing device it runs on. Here’s a video of xssfox using the backdoor on her device and getting root access to her BeagleBone.
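The pattern described — a hard-coded password smuggled in via the URL — is easy to illustrate. The sketch below is hypothetical Python, not the actual KiwiSDR code (which is C/C++), and the parameter name and password are invented for illustration.

```python
# Hypothetical sketch of a URL-parameter backdoor like the one described
# above. NOT the actual KiwiSDR code; the "pwd" parameter name and the
# hard-coded value are made up for illustration.
from urllib.parse import urlparse, parse_qs

MASTER_PASSWORD = "SECRET"  # hypothetical hard-coded master password

def is_backdoor_login(url: str) -> bool:
    """Return True if the request URL carries the hard-coded password."""
    query = parse_qs(urlparse(url).query)
    return query.get("pwd", [None])[0] == MASTER_PASSWORD

# A request such as http://device/admin?pwd=SECRET would bypass the
# owner's normal authentication entirely -- and because the device
# speaks plain HTTP, anyone sniffing the traffic sees the password too.
```

A check like this sits outside the normal login flow, which is why disabling the owner-facing admin password does nothing to close it.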
“Quick video showing how the backdoor on the kiwisdr works,” xssfox wrote. “I’ve also tested that touch /root/kiwi.config/opt.no_console mitigates the issue.”
“It looks like the SDR… plugs into a BeagleBone Arm Linux board,” HD Moore, a security expert and CEO of network discovery platform Rumble, told me. “This shell is on that Linux board. Compromising it may get you into the user’s network.”
The backdoor lives on
Xssfox said that access to the underlying computing device—and possibly other devices on the same network—happens as long as a setting called “console access” is turned on, as it is by default. Turning the access off requires a change to either the admin interface or a configuration file, which many users are unlikely to have made. Additionally, many devices are updated rarely, if ever. So even though the KiwiSDR developer has removed the offending code, the backdoor will live on in devices, making them vulnerable to takeover.
Software submissions and technical documents like this one name the developer of KiwiSDR as John Seamons. Seamons didn’t respond to an email seeking comment for this post.
The user forums were unavailable at the time of publication. Screenshots here and here, however, appear to show Seamons admitting to the backdoor as long ago as 2017.
Another troubling aspect of the backdoor is that, as noted by engineer Mark Jessop, it communicated over an HTTP connection, exposing the plaintext password and data to anyone who could monitor the traffic coming into or out of the device.
“However, given the KiwiSDR is HTTP only, sending what is essentially a ‘master’ password in the clear is a little worrying,” Jessop wrote. “KiwiSDR does not support HTTPS, and it’s been stated that it will never support it. (Dealing with certs on it would be a PITA too.)”
KiwiSDR users who want to check if their devices have been remotely accessed can do so by running the command
zgrep -- "PWD admin" /var/log/messages*
There’s no indication that anyone has used the backdoor to do malicious things, but the very existence of this code and its apparent use over the years to access user devices without permission is itself a security breach—and a disturbing one at that. At a minimum, users should inspect their devices and networks for signs of compromise and upgrade to v1.461. The truly paranoid should consider unplugging their devices until more details become available.
Sick of waiting for your favorite PC games to come to Switch? Valve today unveiled its rumored Steam Deck, a handheld machine for PC gaming. It’s due out December 2021 and comes in models priced at $399, $529, and $649—scaled by storage capacity and speed.
The Steam Deck’s big pitch is on-the-go, faithful PC gaming, but it’s taking an even bigger swing: It’s basically a handheld PC. Users can install and operate PC software on it, like a web browser, other game stores—including the Epic Games Store—and video-streaming services. It can be connected to a monitor and other gaming peripherals, such as a keyboard and mouse (a dock contains ports for these, plus Ethernet). And thanks to its cloud saving feature, players can easily pick up game save files between their Steam Deck and PC. A user’s existing Steam library will live on it, basically immediately upon logging in, because it operates off a modified SteamOS. So not only can players download games like Doom Eternal, Death Stranding, or Hades, they can chat with Steam friends and trawl through Steam’s community forums.
“We don’t think people should be locked into a certain direction or a certain set of software that they can install,” Valve designer Lawrence Yang said to IGN in an interview. “If you buy a Steam Deck, it’s a PC. You can install whatever you want on it, you can attach any peripherals you want to it. Maybe a better way to think about it is that it’s a small PC with a controller attached, as opposed to a gaming console.”
The Steam Deck looks exactly how we’ve come to expect of modern portable gaming devices: a rectangle with two thumbsticks and a 7-inch LCD touchscreen, almost an inch larger than the Nintendo Switch’s. Its resolution is 1280 x 800 pixels, and its display has a 60-Hz refresh rate. Unlike the Switch, the Steam Deck has two square trackpads for “PC games that were never designed to be handheld,” Valve writes in its announcement. Those trackpads, plus the console’s gyro capabilities, should help with aiming, shooting, and moving games’ cameras around. Despite trackpads’ bad rep, Valve says the Steam Deck’s will have 55 percent better latency compared to the Steam Controller.
A little surprisingly, the Steam Deck also features four buttons on the back of the console—something Xbox Elite controller obsessives have been proselytizing for years. They will function much like a keyboard and are totally customizable depending on users’ keybinding preferences.
The Steam Deck comes with 64, 256, or 512 GB of storage, depending on the model, plus a microSD card slot—helpful for installing enormous games optimized for PC, like Death Stranding, which may not even fit on that 64-GB base model. It’s got 16 GB of RAM and advertises a battery life of between seven and eight hours. After a hands-on with the device, IGN’s reviewer said it stayed “comfortably cool” throughout all games, including ones like Star Wars: Jedi Fallen Order. IGN compares its 2-teraflop AMD Accelerated Processing Unit power (a combined CPU and graphics processor, this one custom-built by AMD for Valve) to that of an Xbox One or PlayStation 4. Its reviewer said the Steam Deck is able to smoothly run current PC games at medium or high settings because of its 720p resolution target. (Strangely, maybe even disappointingly, not 1080p.)
Valve has a history of releasing ahead-of-the-curve hardware that doesn’t quite work out. Its 2015 Steam Machine was a prebuilt, Linux-powered gaming computer that ran Steam and Steam games. It flopped. Three years later, Valve quietly removed references to the hardware. Reviewers condemned its poor technical performance and wonky software. Also in 2015, Valve released its significantly more beloved Steam Link, which would wirelessly facilitate PC gaming on television monitors and mobile devices. Valve stopped supporting it in 2018. Valve has also invested heavily in VR headsets, which are well reviewed but not very popular.
The Steam Deck does seem a little too good to be true, but for now, I will be a willing participant in its vision for portable PC gaming. PC gaming anywhere and everywhere is the future. So is connecting the anywhere to the everywhere.
If you’re a member of the US military who’s gotten friendly Facebook messages from private-sector recruiters for months on end, suggesting a lucrative future in the aerospace or defense contractor industry, Facebook may have some bad news.
On Thursday, the social media giant revealed that it has tracked and at least partially disrupted a long-running Iranian hacking campaign that used Facebook accounts to pose as recruiters, reeling in US targets with convincing social engineering schemes before sending them malware-infected files or tricking them into submitting sensitive credentials to phishing sites. Facebook says that the hackers also pretended to work in the hospitality or medical industries, in journalism, or at NGOs or airlines, sometimes engaging their targets for months with profiles across several different social media platforms. And unlike some previous cases of Iranian state-sponsored social media catfishing that have focused on Iran’s neighbors, this latest campaign appears to have largely targeted Americans, and to a lesser extent UK and European victims.
Facebook says it has removed “fewer than 200” fake profiles from its platforms as a result of the investigation and notified roughly the same number of Facebook users that hackers had targeted them. “Our investigation found that Facebook was a portion of a much broader espionage operation that targeted people with phishing, social engineering, spoofed websites, and malicious domains across multiple social media platforms, email, and collaboration sites,” David Agranovich, Facebook’s director for threat disruption, said Thursday in a call with press.
Facebook has identified the hackers behind the social engineering campaign as the group known as Tortoiseshell, believed to work on behalf of the Iranian government. The group, which has some loose ties and similarities to other better-known Iranian groups known by the names APT34 or Helix Kitten and APT35 or Charming Kitten, first came to light in 2019. At that time, security firm Symantec spotted the hackers breaching Saudi Arabian IT providers in an apparent supply chain attack designed to infect the company’s customers with a piece of malware known as Syskit. Facebook has spotted that same malware used in this latest hacking campaign, but with a far broader set of infection techniques and with targets in the US and other Western countries instead of the Middle East.
Tortoiseshell also seems to have opted from the start for social engineering over a supply-chain attack, starting its social media catfishing as early as 2018, according to security firm Mandiant. That includes far more than just Facebook, says Mandiant vice president of threat intelligence John Hultquist. “From some of the very earliest operations, they compensate for really simplistic technical approaches with really complex social media schemes, which is an area where Iran is really adept,” Hultquist says.
In 2019, Cisco’s Talos security division spotted Tortoiseshell running a fake veterans’ site called Hire Military Heroes, designed to trick victims into installing a desktop app on their PC that contained malware. Craig Williams, a director of Talos’ intelligence group, says that fake site and the larger campaign Facebook has identified both show how military personnel trying to find private-sector jobs pose a ripe target for spies. “The problem we have is that veterans transitioning over to the commercial world is a huge industry,” says Williams. “Bad guys can find people who will make mistakes, who will click on things they shouldn’t, who are attracted to certain propositions.”
Facebook warns that the group also spoofed a US Department of Labor site; the company provided a list of the group’s fake domains that impersonated news media sites, versions of YouTube and LiveLeak, and many different variations on Trump family and Trump organization–related URLs.
The threat of Iran-based hacking operations—and particularly, the threat of disruptive cyberattacks from the country—may have appeared to subside as the Biden Administration has reversed course from the Trump administration’s confrontational approach. The 2020 assassination of Iranian military leader Qassem Soleimani in particular led to an uptick in Iranian intrusions that many feared were a precursor to retaliatory cyberattacks that never materialized. President Biden has, by contrast, signaled that he hopes to revive the Obama-era deal that suspended Iran’s nuclear ambitions and eased tensions with the country—a rapprochement that has been rattled by news that Iranian intelligence agents plotted to kidnap an Iranian-American journalist.
But the Facebook campaign shows that Iranian espionage will continue to target the US and its allies, even as the broader political relations improve. “The IRGC are clearly conducting their espionage in the United States,” says Mandiant’s Hultquist. “They’re still up to no good, and they need to be carefully watched.”
In the first six months of 2021, TSMC increased its output of microcontroller units (MCUs), an important component used for car electronics, by 30 per cent compared with the same period last year, the world’s largest contract chipmaker told investors on an earnings call on Thursday. MCU production is expected to be 60 per cent higher for the full year than in 2020, it added.
“By taking such actions, we expect the shortage to be greatly reduced for TSMC customers starting this quarter,” said CC Wei, TSMC’s chief executive.
TSMC’s announcement follows more than nine months of severe shortages of chips, which disrupted global automotive production. The crisis began after carmakers pulled chip orders last fall, leaving them without supplies when demand suddenly surged weeks later.
Analysts have recently raised their outlook for automotive chip supplies.
IHS Markit said in a note in late June that it expected the disruption to recede in the third quarter. “We expect an improvement over the first or second quarter because the situation is becoming better understood and great efforts are being made to enhance visibility within a very complex supply chain,” it wrote.
“We see evidence of this in some of the more relaxed announcements coming from General Motors starting back operations earlier than initially planned and Toyota’s ongoing commitment to its planning.”
Analysts at JPMorgan estimated that production cuts by global carmakers related to the semiconductor shortage would fall to 399,000 vehicles in the third quarter compared with 1.9 million during the second quarter.
In a move set to also boost confidence in longer-term supply security, TSMC said it was ready to keep investing in mature production technology, which auto chip supplies mainly rely on.
“Our strategy more recently in mature nodes is to work more closely with our customers to create specialty solutions; we expect that this structural demand will continue,” said Mark Liu, TSMC’s chair. “We will focus our investment on specialty. For manufacturing greenfield expansion, we don’t rule it out, as long as demand can justify it.”
United Microelectronics Corporation, TSMC’s smaller Taiwanese rival, earlier this year announced a significant expansion of its manufacturing capacity at 28 nanometers, one of the most important nodes for car chip production.
TSMC’s willingness to reinvest in older technologies, a departure from its past practice, is part of a broader strategic adjustment. Liu also announced that the company was ready to invest in more new fabrication plants, or fabs, in countries other than Taiwan.
“There are several projects still under planning,” Liu said, adding that investment in any of those would come on top of the $100 billion in capital spending TSMC has earmarked for the next three years.
The company said it would not rule out expanding its manufacturing base in Arizona beyond the $12 billion fab due to start production in 2024. TSMC also announced that it is doing due diligence on a proposal to build a specialty semiconductor fab in Japan, a country it had previously only considered for research and development.
Liu said that while TSMC would continue its policy of starting cutting-edge technology production in Taiwan and keep R&D there, the need for semiconductor infrastructure security made a more diverse manufacturing footprint necessary “to sustain and enhance our competitive advantage and better serve our customers in the new geopolitical environment.”
TSMC on Thursday reported net profits of NT$134.4 billion (US$4.8 billion) for the second quarter, an 11.2 per cent year-on-year increase. It forecast that revenue would rise 21 per cent to 23 per cent in the third quarter, a slight acceleration from the second quarter.
There’s a moment in any foray into new technological territory when you realize you may have embarked on a Sisyphean task. Staring at the multitude of options available to take on the project, you research them, read the documentation, and start to work—only to find that just defining the problem may be more work than finding the actual solution.
Reader, this is where I found myself two weeks into this adventure in machine learning. I familiarized myself with the data, the tools, and the known approaches to problems with this kind of data, and I tried several approaches to solving what on the surface seemed to be a simple machine-learning problem: based on past performance, could we predict whether any given Ars headline will be a winner in an A/B test?
Things have not been going particularly well. In fact, as I finished this piece, my most recent attempt showed that our algorithm was about as accurate as a coin flip.
But at least that was a start. And in the process of getting there, I learned a great deal about the data cleansing and pre-processing that goes into any machine-learning project.
Prepping the battlefield
Our data source is a log of the outcomes from 5,500-plus headline A/B tests over the past five years—that’s about as long as Ars has been doing this sort of headline shootout for each story that gets posted. Since we have labels for all this data (that is, we know whether it won or lost its A/B test), this would appear to be a supervised learning problem. All I really needed to do to prepare the data was to make sure it was properly formatted for the model I chose to use to create our algorithm.
I am not a data scientist, so I wasn’t going to be building my own model any time this decade. Luckily, AWS provides a number of pre-built models suitable to the task of processing text and designed specifically to work within the confines of the Amazon cloud. There are also third-party models, such as those from Hugging Face, that can be used within the SageMaker universe. Each model seems to need data fed to it in a particular way.
The choice of the model in this case comes down largely to the approach we’ll take to the problem. Initially, I saw two possible approaches to training an algorithm to get a probability of any given headline’s success:
Binary classification: We simply determine what the probability is of the headline falling into the “win” or “lose” column based on previous winners and losers. We can compare the probability of two headlines and pick the strongest candidate.
Multiple category classification: We attempt to rate the headlines based on their click-rate into multiple categories—ranking them 1 to 5 stars, for example. We could then compare the scores of headline candidates.
The second approach is much more difficult, and there’s one overarching concern with either of these methods that makes the second even less tenable: 5,500 tests, with 11,000 headlines, is not a lot of data to work with in the grand AI/ML scheme of things.
So I opted for binary classification for my first attempt, because it seemed the most likely to succeed. It also meant the only data point I needed for each headline (besides the headline itself) is whether it won or lost the A/B test. I took my source data and reformatted it into a comma-separated value file with two columns: titles in one, and “yes” or “no” in the other. I also used a script to remove all the HTML markup from headlines (mostly a few HTML tags for italics). With the data cut down almost all the way to essentials, I uploaded it into SageMaker Studio so I could use Python tools for the rest of the preparation.
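The cleanup script itself isn’t reproduced in the article, but a minimal sketch of that step might look like the following. The regex-based tag stripping, the column names, and the sample rows are assumptions standing in for the real A/B test log:

```python
import csv
import re

def clean_headline(raw):
    # Strip simple HTML markup such as <em>...</em> from a headline
    return re.sub(r"<[^>]+>", "", raw).strip()

# Hypothetical sample of the raw test log: (headline, won its A/B test?)
rows = [
    ("The <em>best</em> headline we ever tested", "yes"),
    ("A headline that lost its test", "no"),
]

# Write the two-column CSV described above: title in one column,
# "yes" or "no" in the other
with open("headlines.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "label"])
    for raw_title, label in rows:
        writer.writerow([clean_headline(raw_title), label])
```

The resulting file is what gets uploaded to SageMaker Studio for the rest of the preparation.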
Next, I needed to choose the model type and prepare the data. Again, much of data preparation depends on the model type the data will be fed into. Different types of natural language processing models (and problems) require different levels of data preparation.
After that comes “tokenization.” AWS tech evangelist Julien Simon explains it thusly: “Data processing first needs to replace words with tokens, individual tokens.” A token is a machine-readable number that stands in for a string of characters. “So ‘ransomware’ would be word one,” he said, “‘crooks’ would be word two, ‘setup’ would be word three… so a sentence then becomes a sequence of tokens, and you can feed that to a deep-learning model and let it learn which ones are the good ones, which ones are the bad ones.”
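The idea Simon describes can be illustrated in a few lines: assign each distinct word an integer ID, then represent a headline as a sequence of those IDs. This toy version is an assumption for illustration only; real models like BlazingText build their vocabularies internally during training.

```python
def build_vocab(sentences):
    # Assign each distinct lowercase word a sequential integer ID
    vocab = {}
    for sentence in sentences:
        for word in sentence.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab) + 1  # reserve 0 for padding
    return vocab

def tokenize(sentence, vocab):
    # Turn a sentence into its sequence of token IDs
    return [vocab[word] for word in sentence.lower().split()]

headlines = ["Ransomware crooks strike again", "Crooks upgrade their setup"]
vocab = build_vocab(headlines)
print(tokenize("ransomware crooks setup", vocab))  # prints [1, 2, 7]
```

Note that “crooks” gets a single ID even though it appears in both headlines; it’s the sequence of IDs, not the words themselves, that the model learns from.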
Depending on the particular problem, you may want to jettison some of the data. For example, if we were trying to do something like sentiment analysis (that is, determining if a given Ars headline was positive or negative in tone) or grouping headlines by what they were about, I would probably want to trim down the data to the most relevant content by removing “stop words”—common words that are important for grammatical structure but don’t tell you what the text is actually saying (like most articles).
However, in this case, the stop words were potentially important parts of the data—after all, we’re looking for structures of headlines that attract attention. So I opted to keep all the words. And in my first attempt at training, I decided to use BlazingText, a text processing model that AWS demonstrates on a classification problem similar to the one we’re attempting. BlazingText requires the “label” data—the data that calls out a particular bit of text’s classification—to be prefaced with “__label__”. And instead of a comma-delimited file, the label data and the text to be processed are put on a single line in a text file, like so:
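The original sample isn’t shown here, but based on the line format BlazingText’s documentation describes, each prepared line would look something like this (the headlines are hypothetical):

```
__label__yes ransomware crooks strike again in hospital attack
__label__no a company was attacked by hackers this week
```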
Another part of data preprocessing for supervised training ML is splitting the data into two sets: one for training the algorithm, and one for validation of its results. The training data set is usually the larger set. Validation data generally is created from around 10 to 20 percent of the total data.
There has been a great deal of research into what is actually the right amount of validation data—some of that research suggests that the sweet spot relates more to the number of parameters in the model being used to create the algorithm rather than the overall size of the data. In this case, given that there was relatively little data to be processed by the model, I figured my validation data would be 10 percent.
In some cases, you might want to hold back another small pool of data to test the algorithm after it’s validated. But our plan here is to eventually use live Ars headlines to test, so I skipped that step.
To do my final data preparation, I used a Jupyter notebook—an interactive web interface to a Python instance—to turn my two-column CSV into a data structure and process it. Python has some decent data manipulation and data science-specific toolkits that make these tasks fairly straightforward, and I used three in particular here:
pandas, a popular data analysis and manipulation module that does wonders slicing and dicing CSV files and other common data formats.
sklearn (or scikit-learn), a data science module that takes a lot of the heavy lifting out of machine-learning data preprocessing.
nltk, the Natural Language Toolkit—and specifically, the Punkt sentence tokenizer for processing the text of our headlines.
Here’s a chunk of the code in the notebook that I used to create my training and validation sets from our CSV data:
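The notebook code itself isn’t reproduced in this text, so what follows is a hedged reconstruction of the steps described next. The column names and the small inline dataset are assumptions; the real notebook loads the full CSV of 11,000 headlines with pd.read_csv.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for pd.read_csv("headlines.csv")
dataset = pd.DataFrame({
    "title": [
        "Ransomware Crooks Strike Again",
        "A Quiet Week In Security News",
        "Chip Shortage Eases At Last",
        "New Laptop Reviewed",
        "Why Headlines Win A/B Tests",
        "The Cloud Bill Came Due",
        "Open Source Project Forked",
        "Phones Keep Getting Bigger",
        "Scientists Map A New Exoplanet",
        "Old Code Never Dies",
    ],
    "label": ["yes", "no", "yes", "no", "yes", "no", "yes", "no", "yes", "no"],
})
print(dataset.head())  # peek at the column headers and the first few rows

# Bulk-add the "__label__" prefix that BlazingText requires
dataset["label"] = "__label__" + dataset["label"]

# Lambda to force all the headline words to lower case
dataset["title"] = dataset["title"].apply(lambda t: t.lower())

# Hold out 10 percent of the rows for validation
train, validation = train_test_split(dataset, test_size=0.1, random_state=42)
```

With ten sample rows and test_size=0.1, the split leaves nine rows for training and one for validation; on the real data, the same call carves off roughly 1,100 headlines.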
I started by using pandas to import the data structure from the CSV created from the initially cleaned and formatted data, calling the resulting object “dataset.” Using the dataset.head() command gave me a look at the headers for each column that had been brought in from the CSV, along with a peek at some of the data.
The pandas module allowed me to bulk-add the string “__label__” to all the values in the label column as required by BlazingText, and I used a lambda function to process the headlines and force all the words to lower case. Finally, I used the sklearn module to split the data into the two files I would feed to BlazingText.
I’m a tall, nerdy, white man in my late twenties who obsesses over tube mics and Japanese-made Les Paul guitars; I’ve never started my own podcast or livestream, but I totally thought I knew the best podcasting gear. It turns out that in the years I’ve spent outfitting my home recording studio with outboard preamps, compressors, and expensive XLR-based microphones, companies have spent tons of research dollars making equipment cheaper and smaller, with pretty incredible results.
I started exploring this more affordable frontier of digital recording and streaming gear, and the takeaway is that it’s easier than ever to produce fantastic-sounding and gorgeous content without emptying your wallet. If you’ve been thinking about starting a podcast or sharing your epic Mario speed runs with the world, here’s what you’ll need.
If you buy something using links in our stories, we may earn a commission. This helps support our journalism. Learn more.
Before You Start
We recommend a lot of gear below, but before you buy anything, think hard about what it is you want to record or livestream. Brainstorm podcast ideas! Block out stories! Think of ways to make your livestreams different from what’s out there. Whether it’s just a hobby or you’re serious about making this a business, good content is always going to be more important than the gear.
You’ll Want a Good Computer
It’s increasingly possible to record podcasts or stream live audio on smartphones, but it’s quicker, easier, and generally more professional to create and stream content on a personal computer. It doesn’t really matter whether you have a PC or a Mac; the vast majority of modern laptops and desktops are more than fast enough for the audio tasks required.
If you’re only doing audio recording and processing, any one of our favorite laptops (or really any modern computer) will do. (Our laptop buying guide can also help in your search.) However, if you’re planning to livestream video or games from a PC, you will want a powerful computer that allows you to both play the game and run your streaming software. If you plan to record or edit video, you’ll also want a speedy computer for rendering.
Here’s what you’ll need to make sure your podcast or stream sounds the best, from microphones to good headphones.
A USB Microphone
Most mics built into headphones, phones, and laptops do the job for calls and Zoom meetings, but they’re not pristine enough for podcast stories or streams. The easiest way to upgrade your sound quality is with a USB microphone. These plug straight into your computer and allow you to record audio in surprisingly high fidelity given their ease of setup.
There are many good ones but also a sea of weird, off-brand models for sale on retailers like Amazon. Steer clear of those. My favorites come from Blue and JLab Audio. Entry-level mics like the Blue Snowball ($62) and JLab Talk Go ($40) are good beginner options.
An Audio Interface
If you’ve got multiple people you’re looking to record or stream at once, you’ll want to buy an audio interface. These are external audio cards that plug into a computer via a USB or Thunderbolt port and allow you to use traditional non-USB microphones. They also typically have headphone jacks, so you can listen as you record.
Audio interfaces start at about $100 for one with a single channel (a single input for a microphone or instrument cable), or they can cost several thousand dollars for models with dozens of inputs and other advanced features. The good news is that you don’t need a super-fancy one! My pick for most people is the Focusrite Scarlett 2I2 ($170). It has two mic inputs, comes in a cool red, and sounds good enough for nearly every application you’ll find.
For mics to connect to the interface, snag a Shure SM58 ($99) or SM57 ($99). These legendary, nearly bulletproof mics sound great, but you can often find them cheaper used. If you want something even fancier, check out the Shure SM7B ($399), which you’ll probably recognize from many of your favorite podcasts, YouTube shows, and streams.
A Portable Recorder
Another great option for recording multiple people or for outdoor locations is a field recorder. WIRED editor Michael Calore uses one to track his vocals for each week’s Gadget Lab podcast, and they sound pretty darn great.
My choice is the Zoom H4N Pro ($230), which lets you easily adjust the gain of the mic (how loud it records) and its sensitivity to sounds that aren’t directly in front of it. You can choose to record only what’s in front of (or behind) the mic if you wish.
A Pop Filter
Many microphones come with built-in filters to help keep your p sounds from popping the mic, but if yours doesn’t have one, it might be worth buying a cheap pop filter ($9) that can easily attach to a desk or mic stand. If you’re a streamer, place the mic farther from your face or off to the side, pointed at your mouth—pop filters can sometimes block you from view.
Good Headphones
A good pair of studio-style headphones will help limit mic feedback and bleed, and will make you look cooler. I like the Audio-Technica M50xBT ($199) because you can pair them wirelessly with your phone or plug them into interfaces and other audio production equipment with the included cable. They sound great and last a long time, which is why you’ll often see the standard, non-Bluetooth M50x in studios around the world.
If you’re looking to mix your audio, a nice pair of open-backed headphones like the Monoprice M570 Planar headphones ($300) will help you make sure your mixes translate to different speakers and headphones.
Another option is to grab yourself a good pair of gaming headphones. They don’t sound as amazing as most dedicated USB mics, but they typically have decent sound and look good if you’re streaming video games. Check out our Best Gaming Headsets and Best Wireless Gaming Headsets guides for our current favorites.
A Mic Stand or Mount
Many USB microphones come with built-in stands, but if you’re using non-USB microphones or you want to place your mic somewhere a built-in stand doesn’t allow, there are a bunch of other options. If your mic isn’t too heavy, try an Amazon Basics boom stand ($18). It should be more than adequate for most people.
A Camera
Many people already have action cams or other nice photo gear. If you have a GoPro or a DSLR or mirrorless camera, you can use it as a webcam replacement—we have an explainer on how to do it. Be aware that GoPro’s software is a bit wonky and that DSLR and mirrorless cameras aren’t always well equipped to shoot video for long stretches with their sensors active. You’ll want a camera the manufacturer recommends for long-form video, like the Sony A7 II ($1,398).
A Sturdy Tripod
A small tripod that can hold your camera or smartphone is essential for properly framing shots. There are many good inexpensive ones, but this tripod kit from SmilePowo ($16) should have everything you need.
A Smartphone Mic
I’ve been using and enjoying Shure’s microphone and tripod kit ($249) to shoot Instagram Live videos and the occasional video review for WIRED. It captures much better audio quality than the smartphone’s built-in microphone and comes with a nifty tripod, so you can plop it anywhere on your desk.
This works only with iPhones and certain Android phones. I haven’t had any trouble with my Google Pixel 4, but be sure to check it’s compatible with whatever you’re using before you buy one. This kit is also great for live podcasts or outdoor video streams.
Lighting Is Important
A couple of LED lights can be the difference between a beautiful stream and an ugly one. Quality lights can be spendy, but this one from Viltrox ($37) comes recommended by WIRED deals contributor Brad Bourque. WIRED associate editor Julian Chokkattu also recommends the Boling P1 ($139). We have more lighting recommendations in our guide on making studio-grade home videos.
If you’re looking at other light options, make sure the light has both color-temperature and brightness adjustments, not just one or the other. If it includes diffusers (for spreading and shaping light) or batteries (for portable use), that’s a plus.
Smartphone Lenses
Everyone at WIRED (and nearly every fellow gear nerd we’ve met) is a big fan of Moment’s phone cases and lenses. They’re not cheap, but they can really take the images you’re getting on a smartphone to the next level. If you’re shooting a podcast with a static shot, or if you find yourself using an old phone as a camera, the wide-angle lens is a good way to get everything in the scene. They’re also helpful for on-the-go streams.
Moment lenses require a Moment case on your phone, but the company only supports the top brands like Apple, Google, OnePlus, and Samsung. Be sure to check whether the company makes a case for your phone model.
A Quiet, Dark Place With Soft Stuff
Look for a room with lots of drapes, carpet, and other soft materials that absorb sound if you’re trying for that classic radio DJ sound. As a general rule, try to avoid spaces where you can hear outside noise through windows or doors.
When shooting video, it can be helpful to get blackout curtains or other devices that make your room darker, so that you can control the exact amount of light you want your camera to pick up.
If your software has the ability, look for equalization and compression presets that are designed for male or female voices (often in a dropdown menu). Consider using a high-pass filter, which can remove annoying low rumbles from things like refrigerators and HVAC systems. Mild compression, which makes the loudest parts of your recording quieter and the quietest parts louder, can give you a more even overall volume. If you don’t know how to do this in your software, be sure to check out the links below.
Useful Apps and Tutorials
Gear isn’t the only thing you’ll need to get started podcasting or livestreaming. Check out these apps. We’ve also rounded up some of the best YouTube tutorials we could find to help.
Open Broadcaster Software for Streaming (free): If you’re looking to stream to Twitch, YouTube, or nearly any other platform, free streaming software OBS is the industry standard. The open source software is available for macOS, Windows, and Linux and is sponsored by Twitch, Facebook, and Nvidia, among other big-name brands.
Audacity (free): From GarageBand to Pro Tools, there are many great digital audio workstations to record audio with. But the best free option for most beginner podcasters and streamers is Audacity. It has everything you’ll need to edit and upload audio, plus an easy-to-use interface.
YouTube is a great free resource. Here are a few places to get started, but feel free to search the platform for videos that might help.
Near the end of 2009, during the twilight months of a decade that saw the first Black man elected to the US presidency, Ashley Weatherspoon was chasing virality on a young app called Twitter. As the personal assistant for the singer Adrienne Bailon, a former member of the pop groups 3LW and the Cheetah Girls, Weatherspoon often worked on social media strategy. For weeks, she and Bailon had been testing out hashtags on both their feeds to see what would connect with fans. A mild success came with variations on #UKnowUrBoyfriendsCheatingWhen. Later, on a car ride around Manhattan, they began playing with #UKnowUrFromNewYorkWhen. “We started going ham on it,” Weatherspoon told me when we spoke over the phone in June. As the two women were laughing and joking, an even better idea popped into Weatherspoon’s head. “Then I said, oh, ‘You know you’re Black when …’”
It was the first Sunday in September, at exactly 4:25 pm, when Weatherspoon logged on to Twitter and wrote, “#uknowurblackwhen u cancel plans when its raining.” The hashtag spread like wildfire. Within two hours, 1.2 percent of all Twitter correspondence revolved around Weatherspoon’s hashtag, as Black users riffed on everything from car rims to tall tees. It was the viral hit she was after—and confirmation of a rich fabric being threaded together across the platform. Here, in all its melanated glory, was Black Twitter.
More than a decade later, Black Twitter has become the most dynamic subset not only of Twitter but of the wider social internet. Capable of creating, shaping, and remixing popular culture at light speed, it remains the incubator of nearly every meme (Crying Jordan, This you?), hashtag (#IfTheyGunnedMeDown, #OscarsSoWhite, #YouOKSis), and social justice cause (Me Too, Black Lives Matter) worth knowing about. It is both news and analysis, call and response, judge and jury—a comedy showcase, therapy session, and family cookout all in one. Black Twitter is a multiverse, simultaneously an archive and an all-seeing lens into the future. As Weatherspoon puts it: “Our experience is universal. Our experience is big. Our experience is relevant.”
Though Twitter launched exactly 15 years ago today, with the goal of changing how—and how quickly—people communicate online, the ingenious use of the platform by Black users can be traced, in a way, much further back in time. In 1970, when the computer revolution was in its infancy, Amiri Baraka, the founder of the Black Arts Movement, published an essay called “Technology & Ethos.” “How do you communicate with the great masses of Black people?” he asked. “What is our spirit, what will it project? What machines will it produce? What will they achieve?”
For Black users today, Twitter is Baraka’s prophetic machine: voice and community, power and empowerment. To use his words, it has become a space “to imagine—to think—to construct—to energize!!!” What follows is the first official chronicling of how it all came fantastically together. Like all histories, it is incomplete. But it is a beginning. An outline. Think of it as a kind of record of Blackness—how it moves and thrives online, how it creates, how it communes—told through the eyes of those who lived it.
Part I: Coming Together, 2008–2012
As early web forums like BlackVoices, Melanet, and NetNoir fizzled out in the mid-2000s, online spaces that catered to Black interests were scarce. BlackPlanet and MySpace failed to fill the void, and Facebook didn’t quite capture the essence of real-time communication. Users were looking for the next thing.
Kozza Babumba, head of social at Genius: Pre-2007, we had never had a conversation about almost anything. As a community, we didn’t all talk about what it was like when we sang the national anthem. Or what it was like when OJ was driving in that white Bronco. We just watched it on TV.
André Brock, author of Distributed Blackness: African American Cybercultures: Black people have tried to create social networks and failed. Black people tried to create apps that would aggregate Black people to do certain things, usually for respectable purposes. Those also failed.
Johnetta Elzie, St. Louis activist: I had all those things—BlackPlanet, MySpace, LiveJournal. I was on Facebook when you needed an invite to get on. I was kinda just bored. So I was like, OK, what’s a tweet? What y’all talking about over here?
Brock: It quickly became clear that Twitter was a space for so much more—the shared sense of socializing together, but also the capacity to comment in near real time.
Elzie: Facebook was just slow. Twitter was new and fun.
Babumba: Right as I was joining, I was like, oh, Black people are on here. And we’re bodying it. We’re really going in.
April Reign, diversity and inclusion advocate: Almost immediately I realized this was a subsection of Twitter. And yet it was very clear that we were running the whole thing.
Brandon Jenkins, TV and podcast host: It was so quintessentially Black. It was really like, who’s the funniest?
Judnick Mayard, TV writer and producer: That was the hook that got me. I was like, OK, there’s some funny people on here, and most of those people are Black and also in college. It felt like being on the quad.
Sylvia Obell, host of the podcast OK Now Listen!: I went to an HBCU—North Carolina A&T State University. We were all just talking about things happening on campus.
Cashawn Thompson, educator: I wasn’t on until October ’08, when we were getting ready to elect Obama to his first term. I wanted to know what was going on, and I heard about Twitter.
Jamilah Lemieux, Slate columnist: I joined the day after Obama won. I wasn’t that far out from Howard.
Mayard: People who were working nine-to-fives didn’t have Twitter yet. High school kids weren’t really on Twitter. It was a specific millennial set.
Jenkins: We didn’t know what other people were thinking before Twitter. It was groundbreaking.
Tracy Clayton, host of the podcast Strong Black Legends: It filled a hole.
Brock: We make spaces out of spaces where we were not intended to be. That’s what we do.
It was a sense of community that crystallized on June 25, 2009, when Michael Jackson—never on Twitter himself, but a reliable source of joy, inspiration, and reaction GIFs—was checked into a hospital in Los Angeles.
Denver Sean, editor of LoveBScott.com: I was standing in line for Transformers at a movie theater in Atlanta when the news broke. Everybody was just staring at their phones. You could hear little Twitter chimes popping off. It was the most surreal thing.
Lemieux: It’s the first thing I remember really doing together as a family, if you will.
Sean: They hadn’t pronounced him dead just yet. While my mom and I were in line, I’m trying to keep up with the story. Then it hits that Michael died. Everybody’s phone is buzzing. Back then there were different Twitter apps—TweetDeck and TweetBot, early versions of Twitterific. I had them all and was familiar with the different sounds.
Jenkins: It took ABC News at least an hour before his passing came across the news ticker in Times Square. I remember thinking, “Damn, Twitter broke this news.”
Sean: This was before my mom’s generation took Twitter seriously. This was before people used Twitter as a news source. It was just something the kids were doing.
Obell: We always used to watch it isolated, but to watch it as a family, it felt like a cookout or family reunion type of moment.
Jenkins: It’s like people talking to the movie screen. But this time it was happening on Twitter. We were tweeting in our native tongue about stuff that was native to us.
Clayton: It was a place for me to go to get away from all of the whiteness that I was surrounded by.
God-is Rivera, global director of culture and community at Twitter: I was understanding that there could be a communal experience that is really specific about our lived experience, which is being Black in America as we watch something or as we ingest something.
By the end of 2009 the Black presence on Twitter was undeniable, and the media began referring to it as “Black Twitter”—a term not everyone embraced at first.
Mayard: It’s ironic. I hate the phrase “Black community,” because we’re not a community. Like, I don’t know all you niggas; we ain’t all friends. But at the same time, I love Black Twitter because it is a community that is an actual Black community.
Jenkins: But we weren’t calling it Black Twitter back then. It was just Twitter.
Mayard: Nothing was coming off of Twitter into the media yet. You wouldn’t go there with the purpose of, “I’m about to say something. I want lots of people to hear it.” Because it’s like, who’s gonna hear it? You stuck to your voice because you weren’t really looking for an audience.
Brock: It was a porch kind of space, where people would just congregate with people they knew and talk about things that were passing them by.
Mayard: The first time I realized that we were organized as Black Twitter was #TwitterAfterDark. It was like, oh, these regular people are making dirty jokes and joking about horny motherfuckers.
Reign: Third-shift Twitter was when things would get quite racy—after 11 pm Eastern time.
Mayard: It was the era of the manual retweet, and we were starting to see the phenomenon of people adding onto a joke. The performance that really comes with Black Twitter was starting to happen.
Lemieux: Someone wrote this article about “Late Night Black People Twitter.” That was around the time that people started to coalesce around that terminology. For me, it was a very natural extension of what had been happening on MySpace and via the Black blogosphere.
Brock: A lot of discussions began to pop off around what it meant to be represented as Black in a tech space.
Jenkins: We were having this moment where the rest of the world was realizing that we did things in groups.
Clayton: It annoyed me because it was just such a frenzy around, “Why are the Blacks tweeting?” For me, Twitter was just Twitter. I felt we were being put under a microscope.
Elzie: I’m not even sure why people called Black Twitter that before 2014. Before Ferguson, we seemed to exist in regional pockets. There was STL Twitter, Chicago Twitter, New York Twitter, Atlanta Twitter, Miami, LA, Houston and Dallas. But there was no real, “Hi, this is everyone and we are all Black Twitter.”
Rembert Browne, creative lead for brand and voice at Twitter: At a certain point I remember being out in New York and someone told me that I was part of Black Twitter—and that being surreal, because I didn’t think of myself as part of anything.
Michael Arceneaux, author of I Don’t Want to Die Poor: Most white people are not around Black people. We don’t really mix outside of work. That’s when I realized: White people are watching. [Laughs.]
As the notion of a “Black Twitter” took off, celebrities, musicians, and other artists joined in, attracting more media attention and more users.
Ashley Weatherspoon, founder of DearYoungQueen.com: I was working for Adrienne Bailon and Fabolous, the rapper, who at the time was the Tweet God. He used to be on Twitter in the beginning like you wouldn’t believe.
Jenkins: Fab used to be the Twitter all-star. It opened up a whole new lane in his persona. He started to become a really humorous character.
Weatherspoon: And in this really unique way we were kind of using the feedback to drive things. For Fab, if something would land with Black Twitter, it would become a line in a verse on a rap song.
Mayard: The only reason to go on Twitter in the beginning was to talk to celebrities. It felt like you were in their head. You wake up and see Questlove has said something to you, and you’re like, whoa.
Weatherspoon: Fab would say a joke, and then three fans would respond and he would retweet those fans. All of a sudden it felt less like fans and celebrities—it felt like the community. It allowed people to just be so open and honest and transparent on the platform because you could be received by a celebrity. Your thought could be received by the head of a network or the head of a record label. There was access to people who had the power to take your sentiment or your thought and bring life to it.
Arceneaux: I don’t think she was on Love & Hip-Hop yet, but Hazel-E was at the time trying to rap. God bless her, but that wasn’t it. I blogged about not liking her video. My mouth is kinda reckless. I can be harsh. And she came at me on Twitter, and we went back and forth.
Clayton: It felt so indulgent to be able to do that. It was like, whoa, I can tweet at Rihanna and tell her what I think, without having to have 18 billion dollars or being in her social circle?
Sean: Rihanna came for Ciara. And then Rihanna clapped back on TLC. She really used to be the queen of social media.
Elzie: It was the Wild Wild West. It was crazy. It was like that GIF where Childish Gambino walks into a room with a pizza box and the room is on fire. That was it.
Entertainment continued to fuel the early years of Black Twitter, culminating in one of the most unlikely events yet: the 2012 premiere of Scandal on ABC.
Obell: Scandal was the show that changed live-tweeting. It was so crazy, it had a Black lead, it was on network TV. I remember thinking, this is going to change how people talk about TV. And it did.
Clayton: The fact that everybody was talking about the show so publicly, it definitely drove up the popularity. I don’t know that I would have watched it without the Black Twitter viewing party.
Obell: That was the first show I remember thinking, “I have to be on Twitter while it’s on.”
Rivera: I’m from New York. Scandal to me was like being in the Magic Johnson Theater in Harlem. If you see a lot of Black people in there, it is usually lit. You ain’t gonna hear most of the movie, but you’re going to have a good time.
Clayton: It got even crazier when Shonda Rhimes herself would jump in on the conversation, answer questions and address somebody’s sassy little tweet. It was one big house party.
Obell: When I think about the earliest day I was ever overwhelmed on Twitter, it was the Scandal finale, when we found out that Eli Pope was Olivia’s dad, which was one of the craziest plot twists to ever happen on the show. I tweeted a photo of me on the floor pretending to be passed out.
Clayton: When you let us do what we want and let us use our voices, when you leave us alone, we do what Black people always do—gather and talk shit and enlighten and be funny and smart.
Brock: Twitter lets the Black community do the thing that it does best, which is signifying—being able to use wordplay or turns of phrase to change the meaning of certain events, to either be more humorous or more critical. Many immigrant communities have a form of signifying. But for some reason, the way Black folk do it on Twitter has really taken off and has really become definitive of what internet culture is.
Of course, it wasn’t all champagne and good times. The burgeoning cultural force that was Black Twitter came with its share of problems—some downright nasty.
Reign: Our culture is appropriated all the time, in every industry, in a myriad of ways, and that is also true within Black Twitter.
Clayton: I wasn’t as aware of how it was going to become a bastion of cultural appropriation.
Reign: I think of Peaches Monroee, who created “on fleek” with that viral Vine. There’s so many examples of how Black Twitter has been undermonetized for years, and yet others have been able to make entire careers off of our brilliance.
Clayton: It really changed the way in which Black culture gets discovered by white folks—and then quickly incorporated into ads and TV shows with white people making money off of it.
Mayard: It is the first time in history that we have digital proof that y’all copy us. Every single thing that we do.
Reign: There are issues with respect to various brands taking our ideas and running with them. There are issues with social media accounts that are clearly not run by Black people attempting to use African American Vernacular English and getting it way wrong. All of that is an issue.
Lemieux: Especially when you get any modicum of visibility as a Black woman, your Twitter experience falls apart.
Reign: Things change not always for the better when you have more followers. This is a running joke—more followers, more problems.
Lemieux: From the beginning of Twitter, it was absolutely fucked up to be a Black feminist on there. There’s a target on your head. Twitter gave a microphone to people who might not otherwise have had it, but it didn’t come with instructions on how you cope with strangers gossiping about the details of your personal life, or how you deal with death threats.
Mayard: When we come into a space, everyone is trying to figure out how to measure up next to us. And that is a lot of what causes resentment for our presence. That causes people to be mad that we are present.
Then, on February 26, 2012, a young boy in Sanford, Florida, was fatally shot on his walk back from the local 7-Eleven.
Rivera: I was a new mom. My daughter just started day care, and I was back at work. I was racing down the highway. I had to be at day care by 6:30 pm, and I would listen to The Michael Baisden Show on Sirius. He was talking about this mother, Sybrina Fulton, who had been calling in to just try and get some national recognition around this man who had killed her son in Florida. She was so upset because this man hadn’t even been arrested, and there was no answer.
Babumba: It was our beautiful boy down there—Trayvon Martin.
Rivera: When I finally figured out his name, I put it into Google. That was when I saw a tweet come up. People were tweeting about what was happening. That was the only place I found it.
Arceneaux: It was very specific to Twitter.
Rivera: I started seeing people whose blogs I followed—people like Jamilah Lemieux, Michael Arceneaux, and Demetria Lucas—talking about this thing. It was specifically Black people who were outraged about what was happening to this mother down in Florida.
Mayard: You saw Black people calling white people’s bullshit in real time because we were experiencing in-real-time pain.
Weatherspoon: All of a sudden we as Black Twitter and a community were able to put the heat and the pressure on them.
Rivera: I remember being a little nervous to say how I really felt. It was through discovering the story that I was just like, I can’t be silent about this. Watching other Black people speak their truth, it gave me the courage to be like, well, even if nobody’s listening to me, I am going to say how I feel.
What users couldn’t foresee, even as they continued to speak up, was that an even bigger storm was on the horizon. The fight for Black lives would become the most transformational movement not simply for Twitter but for the entire country. Nothing would ever be the same again.
Part II will be published on July 22.
Photographs: Thomas Barwick/Getty Images; Barry Chin/The Boston Globe/Getty Images; Danny Feld/Walt Disney Television/Getty Images; Charley Gallay/Getty Images; Jean-Marc Giboux/Liaison/Getty Images; Slaven Vlasic/Getty Images; Hamza Khan/Alamy