Law and Legal
Yale Environment 360: “Two new videos visualize how drastically global temperatures have changed since 1900 — and how much worse they will get by the end of this century. The data visualizations, created by Antti Lipponen, a research scientist at the Finnish Meteorological Institute, depict 200 years of climate change in each of the world’s 191 countries in less than a minute.
“Rapid global warming really exists, has been global in the past, and has affected all the countries in the world,” Lipponen told Yale Environment 360. “Unfortunately, the future does not look different — temperatures will continue rising rapidly and all countries will be affected by climate change.”…
Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice
Richardson, Rashida and Schultz, Jason and Crawford, Kate, Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice (February 13, 2019). New York University Law Review Online, Forthcoming. Available at SSRN in PDF: “Law enforcement agencies are increasingly using algorithmic predictive policing systems to forecast criminal activity and allocate police resources. Yet in numerous jurisdictions, these systems are built on data produced within the context of flawed, racially fraught, and sometimes unlawful practices (‘dirty policing’). This can include systemic data manipulation, falsifying police reports, unlawful use of force, planted evidence, and unconstitutional searches. These policing practices shape the environment and the methodology by which data is created, which leads to inaccuracies, skews, and forms of systemic bias embedded in the data (‘dirty data’). Predictive policing systems informed by such data cannot escape the legacy of unlawful or biased policing practices that they are built on. Nor do claims by predictive policing vendors that these systems provide greater objectivity, transparency, or accountability hold up. While some systems offer the ability to see the algorithms used and even occasionally access to the data itself, there is no evidence to suggest that vendors independently or adequately assess the impact that unlawful and biased policing practices have on their systems, or otherwise assess how broader societal biases may affect their systems.
In our research, we examine the implications of using dirty data with predictive policing, and look at jurisdictions that (1) have utilized predictive policing systems and (2) have done so while under government commission investigations or federal court monitored settlements, consent decrees, or memoranda of agreement stemming from corrupt, racially biased, or otherwise illegal policing practices. In particular, we examine the link between unlawful and biased police practices and the data used to train or implement these systems across thirteen case studies. We highlight three of these: (1) Chicago, an example where dirty data was ingested directly into the city’s predictive system; (2) New Orleans, an example where the extensive evidence of dirty policing practices suggests an extremely high risk that dirty data was or will be used in any predictive policing application; and (3) Maricopa County, where, despite extensive evidence of dirty policing practices, a lack of transparency and public accountability surrounding predictive policing inhibits the public from assessing the risks of dirty data within such systems. The implications of these findings have widespread ramifications for predictive policing writ large. Deploying predictive policing systems in jurisdictions with extensive histories of unlawful police practices presents elevated risks that dirty data will lead to flawed, biased, and unlawful predictions, which in turn risk perpetuating additional harm via feedback loops throughout the criminal justice system. Thus, for any jurisdiction where police have been found to engage in such practices, the use of predictive policing in any context must be treated with skepticism and mechanisms for the public to examine and reject such systems are imperative.”
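The feedback-loop mechanism the authors warn about can be sketched in a few lines. This is a toy simulation with invented numbers, not the paper's methodology: a predictor trained on skewed historical counts sends more patrols to the over-recorded district, which generates more recorded incidents there, which further skews the data.

```python
# Toy feedback-loop simulation: two districts with identical true
# incident rates, but district A starts with inflated historical
# counts ("dirty data"). All numbers are invented for illustration.

true_rate = {"A": 10, "B": 10}   # identical underlying rates
recorded = {"A": 15, "B": 10}    # A's history is inflated

for _ in range(5):
    # allocate the single patrol to the district with more recorded incidents
    target = max(recorded, key=recorded.get)
    # patrolled district: all incidents observed; the other: only half
    for d in true_rate:
        seen = true_rate[d] if d == target else true_rate[d] // 2
        recorded[d] += seen

print(recorded)  # A's recorded lead keeps growing despite equal true rates
```

After five rounds, the initial 5-incident gap has grown to 30, even though nothing about the districts themselves differs.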
Via Sapping Attention blog: “I periodically write about Google Books here, so I thought I’d point out something that I’ve noticed recently that should be concerning to anyone accustomed to treating it as the largest collection of books: it appears that when you use a year constraint on book search, the search index has dramatically constricted to the point of being, essentially, broken. Here’s an example. While writing something, I became interested in the etymology of the phrase ‘set in stone.’ Online essays seem to generally give the phrase an absurd antiquity–they talk about Hammurabi and Moses, as if it had been translated from language to language for millennia. I thought that it must be more recent–possibly dating from printers working with lithography in the 19th century. So I put it into Google Ngrams. As it often is, the results were quite surprising; about 8,700 total uses in about 8,000 different books before 2002, the majority of which are after 1985. Hammurabi is out, but lithography doesn’t look like a likely origin for widespread popularity either…”
Axios: “Researchers have broadened the controversial technology called “deepfakes” — AI-generated media that experts fear could roil coming elections by convincingly depicting people saying or doing things they never did, Axios’ Kaveh Waddell reports.
- A new computer program, created at the San Francisco-based OpenAI lab, is the latest front in deepfakes, producing remarkably human-sounding prose that opens the prospect of fake news circulated at industrial scale.
- How it works: The program “writes” by choosing the best next word based on both the human-written prompt and an enormous database of text it has read on the internet.
- An important point: The AI writer can only make stuff up. It can’t tell the difference between a fact and a lie, which is part of what makes it volatile.
- Go deeper … AI wrote this Axios story.”
Quartz: “More than 60 years after philosopher Ludwig Wittgenstein’s theories on language were published, the artificial intelligence behind Google Translate has provided a practical example of his hypotheses. Patrick Hebron, who works on machine learning in design at Adobe and studied philosophy with Wittgenstein expert Garry Hagberg for his bachelor’s degree at Bard College, notes that the networks behind Google Translate are a very literal representation of Wittgenstein’s work. Google employees have previously acknowledged that Wittgenstein’s theories gave them a breakthrough in making their translation services more effective, but somehow, this key connection between philosophy of language and artificial intelligence has long gone under-celebrated and overlooked.
Crucially, Google Translate functions by making sense of words in their context. The translation service relies on an algorithm created by Google employees called word2vec, which creates “vector representations” of words: essentially, each word is represented numerically. For the translations to work, programmers then have to create a “neural network,” a form of machine learning, that’s trained to understand how these words relate to each other. Most words have several meanings (“trunk,” for example, can refer to part of an elephant, tree, luggage, or car, notes Hebron), and so Google Translate has to understand the context. The neural network will read millions of texts, focusing on the two words preceding and following each word, so as to be able to predict a word based on the words surrounding it. The artificial intelligence calculates probabilistic connections between each word, which form the coordinates of an impossible-to-imagine multi-dimensional vector space…”
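The "two words preceding and following" idea is the context window at the heart of word2vec-style training. A minimal sketch of window extraction, not Google's actual pipeline:

```python
# Minimal illustration of the context-window idea behind word2vec-style
# training: for each target word, collect the two words before and the
# two words after it. Real systems feed these pairs to a neural network
# that learns the vector representations.

def context_pairs(tokens, window=2):
    """Return (target, context) pairs for a CBOW-style setup."""
    pairs = []
    for i, target in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        context = [tokens[j] for j in range(lo, hi) if j != i]
        pairs.append((target, context))
    return pairs

sentence = "the elephant raised its trunk high".split()
for target, context in context_pairs(sentence):
    print(target, "<-", context)
```

Here "trunk" is seen with "raised", "its", and "high" nearby, which is exactly the contextual signal that lets the model place its elephant sense and its luggage sense at different coordinates.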
Nature – Researchers have been left without access to new papers as libraries and the major publisher fail to agree on subscription deals. “Researchers at German institutions that have let their Elsevier subscriptions lapse while negotiating a new deal are hitting the paywall for the publisher’s most recent articles around 10,000 times a day, according to Elsevier — which publishes more than 400,000 papers each year. But at least some German libraries involved in negotiating access to Elsevier say they are making huge savings without a subscription, while still providing any articles their academics request. A major stumbling block to getting deals signed is institutions’ desire to combine the price they pay for subscriptions to pay-walled journals with the cost that libraries and researchers pay to make articles open-access…”
Social Media Explorer: “There are enough social media monitoring tools on the market to get you absolutely confused. This list is here to help. Every tool on the list does what it claims to do (which is not universal among software and products in general) – it either focuses on social media monitoring exclusively or does social media monitoring as part of a broader toolkit. In the right hands, it will definitely help improve customer service, raise brand awareness, and prevent a social media crisis. And some of the tools do even more than that…”
Poynter: “In mid-March, a European Commission high-level group published its final report on misinformation, drawing upon the input of experts from around the world who gathered over several weeks to help the European Union figure out what to do about misinformation. The report created by the high-level group — announced in November to help the EU craft policies to address growing concern about misinformation in Europe — contains an inclusive, collaborative approach to addressing misinformation around the world (Disclosure: Poynter attended the meetings as one of the experts). The report, while imperfect, explicitly recommends not regulating against misinformation — but the EU is only one of many governing bodies that have sought to stem the flow of online misinformation over the past few months. Spanning from Brazil to South Korea, these efforts raise questions about infringing free speech guarantees and are frequently victims of uncertainty. The muddying of the definition of fake news, the relative reach of which is still being studied, hinders governments’ ability to accomplish anything effective. In the spirit of this confusion, explained in detail in a recent Council of Europe report, Poynter has created a guide for existing attempts to legislate against what can broadly be referred to as online misinformation. While not every law contained here relates to misinformation specifically, they’ve all often been wrapped into that broader discussion. We have attempted to label different interventions as clearly as possible. Since these efforts seem to be announced weekly, this article will be updated on an ongoing basis. If you catch an error or know of an update in one of our summaries, email firstname.lastname@example.org or use the Google Form at the bottom of this page and we’ll update as soon as possible.”
- “It may be getting easier to link your private and anonymized DNA data to your identity.
- That means the genetic data you share with a testing company — which may include sensitive health information like your risk of cancer — could one day be matched with your name by an unintended party.
- While some at-home DNA tests like 23andMe have privacy protocols to protect against this, they’re not a guarantee, experts say. Other companies have fewer safeguards.
- One key issue is the ability for users to upload their private DNA data to publicly-accessible genetic databases like the one used in the Golden State Killer case…” [h/t Pete Weiss]
Perkins Coie – “The International Swaps and Derivatives Association (ISDA) has published the first in a series of guidelines for what it colloquially refers to as “smart derivatives contracts” (the Guidelines).* A smart derivatives contract is a derivative that incorporates software code to automate aspects of the derivative transaction and operates on a distributed ledger, such as a blockchain. This series of papers is intended to “provide high-level guidance on the legal documentation and framework that currently governs derivatives trading, and to point out certain issues that may need to be considered by technology developers when introducing technology into that framework.” Derivatives have long been thought to be a fitting use case for smart contract solutions. It is little surprise that derivatives industry incumbents and startups alike are working on novel smart contract solutions to facilitate the execution and clearing of derivatives. Smart derivatives contracts have the potential to create significant efficiencies in the derivatives market by automating the performance of obligations and operations under a derivatives contract. Derivatives settlement is largely reliant upon conditional logic informed by certain data points that can be made available via oracle. However, swaps trading under lengthy master agreements and at heavy volumes adds significant complications. There can be many barriers to smart contract implementation because of the complexity of the legal frameworks, the significant payment netting that occurs, subjective elements of event of default provisions, as well as other costs, fragmentation risks, and technological issues. ISDA recognizes these complexities and has discussed this with industry participants to determine where it can lend assistance as companies experiment with smart derivatives contracts.
ISDA and derivatives traders have voiced concerns about technology developers being sufficiently aware that the legal terms of the ISDA Master Agreement, the supporting documentation, and each individual transaction that sits underneath it, should be appropriately incorporated and not be disturbed without due legal consideration and advice on the potential impact. As a result, ISDA has initiated a series of Guidelines designed to illuminate the core principles of ISDA documentation – and certain important legal terms – that should be maintained when technology is applied to derivatives trading, in a comprehensive fashion with a practical approach.”
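The "conditional logic informed by oracle data" that underpins smart derivatives settlement can be illustrated with a deliberately simplified example. This is a hypothetical sketch, not an ISDA template: a cash-settled forward whose payout is a pure function of a price reported by an oracle.

```python
# Hypothetical sketch of oracle-driven settlement logic for a
# cash-settled forward. Names and numbers are illustrative; a real
# smart derivatives contract would run on a distributed ledger and
# sit under an ISDA Master Agreement.

from dataclasses import dataclass

@dataclass
class Forward:
    notional: float   # units of the underlying
    strike: float     # agreed price per unit

    def settle(self, oracle_price: float) -> float:
        """Positive -> seller pays buyer; negative -> buyer pays seller."""
        return (oracle_price - self.strike) * self.notional

trade = Forward(notional=1_000, strike=50.0)
print(trade.settle(oracle_price=53.0))  # 3000.0
```

The hard parts the article lists (payment netting across a portfolio, subjective event-of-default determinations) are precisely what does not reduce to a pure function like `settle`, which is why ISDA's Guidelines focus on where automation fits within the existing legal framework.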
The Guardian Data Visualization – How Brexit has created four new political factions – Analysis of Commons voting patterns show how Europhobe and Europhile rebels from both main parties are forming new parliamentary blocs
“Our study clusters MPs by the similarity of their voting patterns: if two MPs always vote the same way, the chart groups them tightly together. The patterns on key Brexit votes reveal the emergence of four cross-party political factions that are wrangling for control of the negotiations. A cross-party group of pro-European MPs usually votes with each other, with or against their own frontbenches, while Europhobe Conservatives now constitute a party within the party. Search for your MP to see which faction they were most closely aligned with as the Brexit votes unfolded…”
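The clustering idea, grouping MPs whose division records agree, can be shown with a toy pairwise-agreement measure. The names and votes below are invented; the Guardian's actual method is more sophisticated than this sketch:

```python
# Toy sketch of vote-similarity clustering: encode each MP's record on
# key divisions as +1 (aye), -1 (no), 0 (absent) and measure pairwise
# agreement. MPs with high mutual agreement would be drawn together
# into the same faction.

votes = {
    "MP_A": [ 1,  1, -1,  1],
    "MP_B": [ 1,  1, -1,  1],   # votes exactly like MP_A
    "MP_C": [-1, -1,  1, -1],   # votes the opposite way
}

def agreement(a, b):
    """Fraction of divisions on which two MPs voted the same way."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

print(agreement(votes["MP_A"], votes["MP_B"]))  # 1.0
print(agreement(votes["MP_A"], votes["MP_C"]))  # 0.0
```

Cross-party factions emerge when this score is high between MPs of different parties and low between an MP and their own frontbench.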
National Geographic – In 1792, leading architects entered a competition to build the President’s House, George Washington judged it, and the winner built an American icon. “It may seem like Washington, D.C. was the perfect spot for the U.S. capital, but its selection was controversial. Secretary of the Treasury Alexander Hamilton and others wanted the capital to be located in a northern commercial center. Southern leaders proposed that the federal city be built in an agricultural region to avoid concentrating financial and political power. Businessmen in Philadelphia and New York sought to lure the president by building great residences for him, but George Washington selected a site currently located between Virginia and Maryland on the Potomac River. He believed that the location would be the seed for a great capital city, the equal of Paris or London...”
The New York Times Magazine – The Secret History of Women in Coding – Computer programming once had much better gender balance than it does today. What went wrong? by Clive Thompson (adapted from “Coders: The Making of a New Tribe and the Remaking of the World,” available March 26, 2019): “When digital computers finally became a practical reality in the 1940s, women were … pioneers in writing software for the machines. At the time, men in the computing industry regarded writing code as a secondary, less interesting task. The real glory lay in making the hardware. … If we want to pinpoint a moment when women began to be forced out of programming, we can look at one year: 1984. A decade earlier, a study revealed that the numbers of men and women who expressed an interest in coding as a career were equal. … From 1984 onward, the percentage dropped; by the time 2010 rolled around, … 17.6 percent of the students graduating from computer-science and information-science programs were women. One reason … has to do with a change in how and when kids learned to program. … Once the first generation of personal computers, like the Commodore 64 or the TRS-80, found their way into homes, teenagers were able to play around with them [before entering college] … By the mid-’80s, some college freshmen … were remarkably well prepared. … [T]hese students were mostly men, as two academics discovered when they looked into the reasons women’s enrollment was so low…”
Quartz: “There are gender pay gaps … and then there are median gender pay gaps. Understanding the difference between the two may determine just how much progress women make in terms of fairer compensation in the next decade. So first, the definitions:
- “Equal pay” gap: What women are paid versus their direct male peers, statistically adjusted for factors such as job, seniority, and geography. Often referred to in the context of “equal pay for equal work.”
- “Median pay” gap: The median pay of women working full time versus men working full time. This is an unadjusted raw measure used by the Organization for Economic Cooperation and Development (OECD). Women in the US, for example, make 80 cents on the dollar versus men on this basis.
Equal pay gaps measure whether women are being paid commensurate with their peers for the work they are doing today. But median pay gaps measure whether or not women are holding as many high-paying jobs as men. Narrowing the median pay gap means putting more women in leadership (and reaping the performance benefits that diversity affords). And that’s where investors come in. Concerned shareholders in major US financial and tech companies want to make sure the pay gap difference is understood—and acted upon. Consider the case of Citigroup. While it is true that women at Citi are paid 99% of what men are paid on an equal-pay basis when adjusting for job function, level, and geography, the median pay gap at the financial giant paints a very different picture: Women at Citigroup earn just 71% of what the men earn…”
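The Citigroup pattern (a 99% adjusted gap alongside a 71% median gap) falls out of the arithmetic whenever women hold fewer of the high-paying roles. A minimal illustration with invented salaries, not Citi's actual figures:

```python
# Why an "equal pay" gap can be near zero while the median gap is
# large: within each role pay is identical, but women hold fewer of
# the senior roles. All numbers are invented for illustration.

from statistics import median

# full-time salaries; senior role pays 200k, junior role pays 60k,
# with no within-role gender difference (adjusted gap = 0%)
men   = [200_000, 200_000, 200_000, 60_000]   # mostly senior
women = [200_000, 60_000, 60_000, 60_000]     # mostly junior

median_gap = median(women) / median(men)
print(f"median pay ratio: {median_gap:.0%}")  # 30%
```

Narrowing the median gap in this toy example requires moving women into the senior roles, not adjusting pay within roles, which is exactly the shareholder argument the article describes.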
Shane Parrish – Farnam Street – The Dying Art of Conversation: My Interview with Author and Speaker Celeste Headlee [The Knowledge Project Ep. #51 – Podcast] “Speaker, author and radio journalist Celeste Headlee has had decades of experience fine tuning the recipe for engaging and rewarding conversation. She shares some tips to help us instantly improve our conversational skills and meaningfully connect with others.”
Interactive map shows what the climates of 540 urban areas in US and Canada will feel like in 60 years
The University of Maryland: “The map was created by Matt Fitzpatrick at the University of Maryland Center for Environmental Science and Robert Dunn of North Carolina State University [previously], who have also published an accompanying paper that details their methods for climate-analog mapping. In general, the closest analogs for future North American climates are to the south. But due to changing precipitation patterns significant eastward or westward shifts may also be involved. And for higher altitude cities, the nearest equivalent future climate may even exist to the north at lower elevations. The map and study look at two different scenarios: a business-as-usual future with no significant cuts to greenhouse gas emissions, and a moderate reduction in emissions as envisioned under the Paris Agreement.
“Under the business as usual emissions the average urban dweller is going to have to drive nearly 1,000 km to the south to find a climate like that expected in their home city by 2080,” said Fitzpatrick. “Not only is climate changing, but climates that don’t presently exist in North America will be prevalent in a lot of urban areas.”
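Climate-analog mapping of this kind reduces to a nearest-neighbor search: take a city's projected future climate and find the present-day city whose climate is closest to it. A toy sketch with invented numbers, far simpler than the paper's actual method:

```python
# Toy climate-analog matching: find the present-day city whose
# (temperature, precipitation) profile is closest to a city's
# projected 2080 climate. All values are invented; the real study
# uses many more variables and standardizes them before comparing.

import math

present = {
    "Washington DC": (14.0, 1000),   # annual mean degrees C, precip mm
    "Richmond":      (15.5, 1100),
    "Memphis":       (17.7, 1350),
}

projected_2080 = {"Washington DC": (17.5, 1300)}

def nearest_analog(future_climate, candidates):
    """Return the candidate city with the smallest Euclidean distance."""
    return min(candidates,
               key=lambda city: math.dist(future_climate, candidates[city]))

city, future = next(iter(projected_2080.items()))
print(city, "-> 2080 analog:", nearest_analog(future, present))
```

The "drive south to find your future climate" framing is this search read geographically: the nearest analog usually sits hundreds of kilometers toward the equator.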
“The American Association of Law Libraries (AALL) is advocating for the passage of the Electronic Court Records Reform Act, introduced in the U.S. House of Representatives today by House Judiciary Committee Ranking Member Doug Collins (R-Ga.) and Congressman Mike Quigley (D-Ill.), chair of the Congressional Transparency Caucus. This legislation would, for the first time, allow free access to electronic federal court records through the Public Access to Court Electronic Records (PACER) system and improve the efficiency and transparency of the courts. AALL coordinated a letter signed by 15 other organizations—including the American Civil Liberties Union, the Data Coalition, and the Project on Government Oversight—urging passage of the bill. “Access to the law, and information about the law, is the cornerstone of any democracy. The American Association of Law Libraries has long advocated for no-fee access to federal court records through PACER, and the Electronic Court Records Reform Act would finally make that vision a reality,” said Femi Cadmus, president of AALL. “Eliminating PACER fees will improve transparency of the courts and allow law libraries to preserve and provide access to court records.
We urge Congress to enact this legislation.” The Electronic Court Records Reform Act would: Consolidate the case management/electronic case files system and require all documents in the system be searchable, machine-readable and available to the public and to parties before the court free of charge; Protect private information, requiring the courts to redact any information prohibited from public disclosure…”
“It’s a continuing love story for most owners and their vehicles as overall dependability for three-year-old vehicles improves 4% from last year, according to the J.D. Power 2019 U.S. Vehicle Dependability Study (VDS). “Vehicle dependability continues to improve, but I wouldn’t say that everything is rosy,” said Dave Sargent, Vice President of Global Automotive at J.D. Power. “Vehicles are more reliable than ever, but automakers are wrestling with problems such as voice recognition, transmission shifts and battery failures. Flawless dependability is a determining factor in whether customers remain loyal to a brand, so manufacturers need to help customers who are currently experiencing vehicle problems and address these trouble spots on future models.” The study, now in its 30th year, measures the number of problems experienced per 100 vehicles (PP100) during the past 12 months by original owners of three-year-old model-year vehicles. The 2019 study measures problems in model year 2016 vehicles. A lower score reflects higher quality, and the study covers 177 specific problems grouped into eight major vehicle categories…”
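The PP100 metric described above is a simple rate. A one-function sketch with invented survey numbers, not J.D. Power's data:

```python
# PP100 as described in the study: problems reported per 100 vehicles
# over the past 12 months, so a lower score means higher quality.
# The survey numbers below are invented for illustration.

def pp100(total_problems: int, vehicles_surveyed: int) -> float:
    """Problems experienced per 100 vehicles."""
    return total_problems / vehicles_surveyed * 100

print(pp100(total_problems=1_360, vehicles_surveyed=1_000))  # 136.0
```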
Fortune 100 Best Companies to Work For – This year’s annual list of best companies to work for features Hilton in the top spot. But the companies on this list belong to a variety of industries, from grocery chains to tech organizations. Fortune research partner Great Place to Work evaluated everything from company perks to opportunities for innovation for this year’s list. Learn more about the companies – here.
“Innovation by all. How do you encourage it? How do you harness it? And most important, how do you make sure you’re not stifling it? As we talked to top-performing companies of every size and across every industry on our 22nd annual list, the challenge of getting the best ideas from all your employees is the theme that came up more than any other. One obvious example is at our new No. 1: Hilton. Relying on a Millennial Team Member Resource Group is just one of the ways this 100-year-old hospitality company is making sure all employees (in this case, its youngest) get a chance to contribute their best ideas. Attempting to “actively solicit input, new ideas, learnings, and experiences” has become paramount, says Hilton’s chief human resources officer Matthew Schuyler.
Elsewhere on our list, Cisco (No. 6) is developing more and more programs to seed innovation, such as an annual companywide competition in which employees can “invest” tokens in the best ideas (the contest has led to seven proofs of concept and eight patents). Indeed, this list includes dozens of role models for encouraging innovation (which is also the theme of our February Great Place to Work for All Summit). Still, we wondered: Could this magic formula be quantified?…”
The Washington Post: “In a month, the National Weather Service plans to launch its “next generation” weather prediction model with the aim of “better, more timely forecasts.” But many meteorologists familiar with the model fear it is unreliable. The introduction of a model that forecasters lack confidence in matters, considering the enormous impact that weather has on the economy, estimated at around $485 billion annually. The Weather Service announced Wednesday that the new model, known as the FV3 (which stands for Finite Volume Cubed-Sphere dynamical core), is “tentatively” set to become the United States’ primary forecast model on March 20, pending tests. It will replace the current version of the GFS, popularly known as the American model, which has existed in various forms for more than 30 years. The introduction of the FV3 is intended as the Weather Service’s next step toward building the best weather prediction model in the world, a stated priority of the Trump administration. The current GFS model trails the European model in accuracy, and has for many years, despite millions of dollars in congressional funding dating back to 2012, after Hurricane Sandy hit.
Numerous meteorologists who have experience using the FV3 worry it’s not ready for prime time and have been underwhelmed by its performance. For months, its predictions have been publicly available, on an experimental basis for forecasters to evaluate. When news broke about the Weather Service’s intention to make the FV3 the United States’ primary model, meteorologists unleashed a torrent of complaints and negative reviews on Twitter…”