One of the tragic ironies of the digital age is this: Despite the fact that low-income Americans have experienced a long history of disproportionate surveillance, their unique privacy and security concerns are rarely visible in Washington and Silicon Valley. As my colleague Michele Gilman has noted, the poor often bear the burden of both ends of the spectrum of privacy harms; they are subjected to greater suspicion and monitoring when they apply for government benefits and live in heavily policed neighborhoods, but they can also lose out on education and job opportunities when their online profiles and employment histories aren’t visible (or curated) enough.
The poor experience these two extremes — hypervisibility and invisibility — while often lacking the agency or resources to challenge unfair outcomes. For instance, they may be unfairly targeted by predictive policing tools designed with biased training data or unfairly excluded from hiring algorithms that scour social media networks to make determinations about potential candidates. In this increasingly complex ecosystem of “networked privacy harms,” one-size-fits-all privacy solutions will not serve all communities equally. Efforts to create a more ethical technology sector must take the unique experiences of vulnerable and marginalized users into account.
I led Pew Research Center’s research on understanding how Americans’ attitudes toward privacy were affected by Edward Snowden’s leak of documents about widespread government surveillance by the National Security Agency. Through surveys and focus group interviews, I started to see that part of the untold story in the research and policy community was the way in which low-income communities experience privacy-related harms differently.
What we found in a nationally representative survey of 3,000 American adults was striking. Not only did Americans with lower levels of income and education have fewer technology resources and lower levels of confidence in their ability to protect their digital data, but they also expressed heightened sensitivities about a range of overlapping offline privacy and security harms. This helped to illustrate a critical dimension of digital inequality that is often overlooked: the poor must navigate a matrix of privacy and security vulnerabilities in their daily lives — any of which could dramatically upend their financial, professional or social well-being. For example, when someone who is living paycheck to paycheck falls victim to an online fraud or loses the ability to use his or her smartphone after it gets hacked, the cascade of repercussions can be devastating.
We found that not only are low-income Americans more concerned than their wealthier counterparts about losing control over how their information is collected and used, but they’re also more worried about being harassed online or having their financial information stolen. And while societal fears about data breaches are widespread, identity theft poses a much heavier burden for people living on the margins.
At the same time, the backdrop for these online experiences is a heightened sense of worry about the precariousness of their physical privacy and security. For instance, low-income Americans, particularly in communities of color, are significantly more likely than higher-income groups to express concerns about being unfairly targeted by law enforcement. This kind of targeting may take the form of warrantless cellphone location tracking that results in wrongful arrests or pervasive networks of cameras and sensors that monitor all of the public activity in low-income neighborhoods in a constant search for suspicious activity.
The story of income inequality and differential surveillance practices in America is also deeply intertwined with the history of racial inequalities. In addition to understanding the differing concerns of economically marginalized groups, it’s critical to understand how different racial and ethnic groups experience privacy. From the government surveillance of black civil rights leaders in the 1960s to the surveillance of Black Lives Matter protesters on social media today, there are myriad examples of communities of color enduring a disproportionate level of scrutiny when compared with white Americans engaged in the same kinds of activities.
More recently, the ongoing government tracking of the foreign-born Hispanic population — which is also among the poorest and least-educated group of adults in the country — has resulted in raids and deportations that have separated family members and created a climate of widespread fear. This mass surveillance is causing eligible families not to apply for life-sustaining supports like food stamps, to avoid getting the health care they need and to pull their children out of school.
What does this surveillance look like in practice? We recently learned that in Washington State, at least once per day over a period of two and a half years, agents with the Immigration and Customs Enforcement agency would, without warrants, obtain the names, birth dates, identification data, room assignments and license plate numbers of guests at several Motel 6 locations. They would then highlight the names of guests that “sounded Latino” to target them for questioning, detainment and deportation. Motel 6 was ultimately ordered to pay $12 million in damages to guests who were affected. But the scale of this kind of targeting and discrimination pales in comparison with the kind of automated surveillance of social media, search and facial recognition data that has been pursued in support of President Trump’s “zero tolerance” policy on immigration.
Our research suggests that low-income Americans, and in particular, foreign-born Hispanic adults, are disproportionately reliant on mobile devices as their primary source of internet access. While internet connectivity has become essential to these communities, it also creates privacy and security vulnerabilities that they don’t feel prepared to navigate. The survey findings illustrate a substantial demand for educational resources among low-socioeconomic-status groups, but many feel as though it would be difficult to get access to the tools and strategies they would need to learn more about protecting their personal information online.
As we approach the 2020 census — the first time the Census Bureau will ask a majority of people to respond online — it is critically important that we do more to address the lack of confidence that low-income and marginalized communities have in the integrity of our data ecosystem more broadly. When those who influence policy and technology design perceive less privacy risk themselves, it contributes to a lack of investment in the kind of safeguards and protections that vulnerable communities both want and urgently need.
Date: May 01, 2019
Source: The New York Times