Tech-n-law-ogy

Open Call for Fellowship Applications, Academic Year 2018-2019

About the Fellowship Program • Qualifications • Commitment to Diversity • Logistics • Stipends and Benefits • About the Berkman Klein Center • FAQ • Required Application Materials • Apply!


The Berkman Klein Center for Internet & Society at Harvard University is now accepting fellowship applications for the 2018-2019 academic year through our annual open call. This opportunity is for those who wish to spend 2018-2019 in residence in Cambridge, MA as part of the Center's vibrant community of research and practice, and who seek to engage in collaborative, cross-disciplinary, and cross-sectoral exploration of some of the Internet's most important and compelling issues.
 

Applications will be accepted until Wednesday, January 31, 2018 at 11:59 p.m. Eastern Time.
 

We invite applications from people working on a broad range of opportunities and challenges related to Internet and society, which may overlap with ongoing work at the Berkman Klein Center and may expose our community to new opportunities and approaches. We encourage applications from scholars, practitioners, innovators, engineers, artists, and others committed to understanding and advancing the public interest who come from, and have interest in, both industrialized and developing countries, with ideas, projects, or activities in all phases on a spectrum from incubation to reflection.


Through this annual open call, we seek to advance our collective work and give it new direction, and to deepen and broaden our networked community across backgrounds, disciplines, cultures, and home bases. We welcome you to read more about the program below, and to consider joining us as a fellow!

About the Berkman Klein Fellowship Program

“The Berkman Klein Center's mission is to explore and understand cyberspace; to study its development, dynamics, norms, and standards; and to assess the need or lack thereof for laws and sanctions.


We are a research center, premised on the observation that what we seek to learn is not already recorded. Our method is to build out into cyberspace, record data as we go, self-study, and share. Our mode is entrepreneurial nonprofit.”


Inspired by our mission statement, the Berkman Klein Center’s fellowship program provides an opportunity for some of the world’s most innovative thinkers and changemakers to come together to hone and share ideas, find camaraderie, and spawn new initiatives. The program encourages and supports fellows in an inviting and playful intellectual environment, with community activities designed to foster inquiry and risk-taking, to identify and expose common threads across fellows’ individual activities, and to bring fellows into conversation with the faculty directors, employees, and broader community at the Berkman Klein Center.  From their diverse backgrounds and wide-ranging physical and virtual travels, Berkman Klein Center fellows bring fresh ideas, skills, passion, and connections to the Center and our community, and from their time spent in Cambridge help build and extend new perspectives and actions out into the world.


A non-traditional appointment that defies any one-size-fits-all description, each Berkman Klein fellowship carries a unique set of opportunities, responsibilities, and expectations based on each fellow’s goals. Fellows appointed through this open call come into their fellowship with a personal research agenda and a set of ambitions they wish to pursue while at the Center. These might include focused study or writing projects, action-oriented meetings, the development of a set of technical tools, capacity-building efforts, testing different pedagogical approaches, or efforts to intervene in public discourse and trial new platforms for exchange. Over the course of the year fellows advance their research and contribute to the intellectual life of the Center and fellowship program activities; as they learn with and are influenced by their peers, fellows have the freedom to change and modify their plans.


Together fellows actively design and participate in weekly all-fellows sessions, working groups, skill shares, hacking and development sessions, and shared meals, as well as joining in a wide range of Berkman Klein Center events, classes, brainstorms, interactions, and projects. While engaging in both substance and process, much of what makes the fellowship program rewarding is created each year by the fellows themselves to address their own interests and priorities. These entrepreneurial, collaborative ventures – ranging at once from goal-oriented to experimental, from rigorous to humorous – ensure the dynamism of a fellowship experience, the fellowship program, and the Berkman Klein community. The Center also works to support our exemplary alumni network, and beyond a period of formal affiliation, community members maintain ongoing active communication and mutual support across cohorts.


Alongside and in conversation with the breadth and depth of topics explored through the Center’s research projects, fellows engage the fairly limitless expanse of Internet & society issues. Within each cohort of fellows we encourage and strive for wide-ranging inquiry and focused study, and these areas of specialty and exploration vary from fellow to fellow and year to year. Some broad issues of interest include (but are not limited to) fairness and justice; economic growth and opportunity; the ethics and governance of artificial intelligence; equity, agency, inclusion, and diversity; health; security; privacy; access to information; regulation; politics; and democracy. As fields of Internet and society studies continue to grow and evolve, and as the Internet reaches into new arenas, we expect that new areas of interest will emerge across the Center as well. We look forward to hearing from potential fellows in these nascent specialties and learning more about the impact of their work.

Qualifications

We welcome applications from people who feel that a year in our community as a fellow would accelerate their efforts and contribute to their ongoing personal and professional development.
 

Fellows come from across the disciplinary spectrum and different life paths. Some fellows are academics, whether students, post-docs, or professors. Others come from outside academia, and are technologists, entrepreneurs, lawyers, policymakers, activists, journalists, educators, or other types of practitioners from various sectors. Many fellows wear multiple hats, and straddle different pursuits at the intersections of their capacities. Fellows might be starting, rebooting, driving forward in, questioning, or pivoting from their established careers. Fellows are committed to spending their fellowship in concert with others, guided by a heap of kindness, a critical eye, and a generosity of spirit.


The fellowship selection process is a multi-dimensional mix of art and science, based on considerations that are specific to each applicant and that also consider the composition of the full fellowship class. Please visit our FAQ to learn more about our selection criteria and considerations.

To learn more about the backgrounds of our current community of fellows, check out our fall video series with new fellows and our 2017-2018 community announcement, read their bios, and find them on Twitter. Previous fellows announcements also give an overview of the people and topics in our community: 2016-2017, 2015-2016, 2014-2015, 2013-2014.

 

Commitment to Diversity

The work and well-being of the Berkman Klein Center for Internet & Society are profoundly strengthened by the diversity of our network and our differences in background, culture, experience, national origin, religion, sexual orientation, gender, gender identity, race, ethnicity, age, ability, and much more. We actively seek and welcome people of color, women, the LGBTQIA+ community, persons with disabilities, and people at intersections of these identities, from across the spectrum of disciplines and methods. In support of these efforts, we are offering a small number of stipends to select incoming fellows chosen through our open call for applications.  More information about the available stipends may be found here. More information about the Center’s approach to diversity and inclusion may be found here.

 

Logistical Considerations

While we embrace our many virtual connections, spending time together in person remains essential. In order to maximize engagement with the community, fellows are encouraged to spend as much time at the Center as they are able, and are expected to conduct much of their work from the Cambridge area, in most cases requiring residency. Tuesdays hold particular importance: it is the day the fellows community meets for a weekly fellows hour, as well as the day the Center hosts a public luncheon series. As a baseline, we ask fellows to commit to spending as many Tuesdays at the Center as possible.


Fellowship terms run for one year, and we generally expect active participation from our fellows over the course of the academic year, roughly from the beginning of September through the end of May.
 

In some instances, fellows are re-appointed for consecutive fellowship terms or assume other ongoing affiliations at the Center after their fellowship.


Stipends and Access to University Resources

Stipends

Berkman Klein fellowships awarded through the open call for applications are rarely stipended, and most fellows receive no direct funding through the Berkman Klein Center as part of their fellowship appointment.


To make Berkman Klein fellowships a possibility for as wide a range of applicants as possible, in the 2018-2019 academic year we will award a small number of stipends to select incoming fellows chosen through our open call for applications. This funding is intended to support people from communities who are underrepresented in fields related to Internet and society, who will contribute to the diversity of the Berkman Klein Center’s research and activities, and who have financial need. More information about this funding opportunity can be found here.


There are various ways fellows selected through the open call might be financially supported during their fellowship year. A non-exhaustive list: some fellows have received external grants or awards in support of their research; some fellows have received a scholarship or are on sabbatical from a home institution; some fellows do consulting work; some fellows maintain their primary employment alongside their fellowship. In each of these different scenarios, fellows and the people with whom they work have come to agreements that allow the fellow to spend time and mindshare with the Berkman Klein community, with the aim to have the fellow and the work they will carry out benefit from the affiliation with the Center and the energy spent in the community. Fellows are expected to independently set these arrangements with the relevant parties.
 

Office and Meeting Space

We endeavor to provide comfortable and productive spaces for coworking and flexible use by the community. Some Berkman Klein fellows spend every day in our office, and some come in and out throughout the week while otherwise working from other sites. Additionally, fellows are supported in their efforts to host small meetings and gatherings at the Center and in space on the Harvard campus.
 

Access to University Resources

  • Library Access: Fellows are able to acquire Special Borrower privileges with the Harvard College Libraries, and are granted physical access to Langdell Library (the Harvard Law School Library). Access to e-resources is available within the libraries.

  • Courses: Berkman Klein fellows often audit classes across Harvard University; however, they must individually ask for permission directly from the professor of the desired class.

  • Benefits: Fellows appointed through the open call are not able to purchase University health insurance or obtain Harvard housing.


Additional Information about the Berkman Klein Center

The Berkman Klein Center for Internet & Society at Harvard University is dedicated to exploring, understanding, and shaping the development of the digitally-networked environment. A diverse, interdisciplinary community of scholars, practitioners, technologists, policy experts, and advocates, we seek to tackle the most important challenges of the digital age while keeping a focus on tangible real-world impact in the public interest. Our faculty, fellows, staff, and affiliates conduct research, build tools and platforms, educate others, form bridges, and facilitate dialogue across and among diverse communities. More information at https://cyber.harvard.edu.

To learn more about the Center’s current research, consider watching a video of the Berkman Klein Center’s Faculty Chair Jonathan Zittrain giving a lunch talk from Fall 2017, and check out the Center’s most recent annual reports.

Frequently Asked Questions

To hear more from former fellows, check out 15 Lessons from the Berkman Fellows Program, a report written by former fellow and current Fellows Advisory Board member David Weinberger. The report strives to "explore what makes the Berkman Fellows program successful...We approached writing this report as a journalistic task, interviewing a cross-section of fellows, faculty, and staff, including during a group session at a Berkman Fellows Hour. From these interviews a remarkably consistent set of themes emerged."

 

More information about fellows selection and the application process can be found on our Fellows Program FAQ.

If you have questions not addressed in the FAQ, please feel welcome to reach out to Rebecca Tabasky, the Berkman Klein Center's manager of community programs, at rtabasky@cyber.harvard.edu.
 

Required Application Materials

(1.) A current resume or C.V.

(2.) A personal statement that responds to the following two questions. Each response should be between 250 and 500 words.

  • What is the research you propose to conduct during a fellowship year? Please:

    • describe the problems you are trying to solve;

    • outline the methods which might inform your research; and

    • tell us about the public interest and/or the communities you aim to serve through your work.

       

  • Why is the Berkman Klein Center the right place for you to do this work?  Please share thoughts on:    

    • how the opportunity to engage colleagues from different backgrounds, with a range of experiences and training in disciplines unfamiliar to you, might stimulate your work;

    • which perspectives you might seek out to help you fill in underdeveloped areas of your research;

    • what kinds of topics and skills you seek to learn with the Center that are outside of your primary research focus and expertise; and

    • the skills, connections, and insights you are uniquely suited to contribute to the Center’s community and activities.

(3.) A copy of a recent publication or an example of relevant work. Written work, for instance, should be on the order of a paper or chapter (not an entire book or dissertation) and should be in English.

(4.) Two letters of recommendation, sent directly from the reference.
 

Apply for a 2018-2019 Academic Year Fellowship Through Our Open Call

The application deadline is Wednesday, January 31, 2018 at 11:59 p.m. Eastern Time.


Applications will be submitted online through our Application Tracker tool at:

http://brk.mn/1819app
 

Applicants will submit their resume/C.V., their personal statement, and their work sample as uploads within the Berkman Klein Application Tracker.  Applicants should ensure that their names are included on each page of their application materials.
 

Recommendation letters will be captured through the Application Tracker, and the Application Tracker requires applicants to submit the names and contact information for references in advance of the application deadline. References will receive a link at which they can upload their letters. We recommend that applicants create their profiles and submit reference information in the Application Tracker as soon as they know they are going to apply and have identified their references; this step does not require other fellowship application materials to be submitted at the same time. We do ask that letters be received from the references by the application deadline.

Instructions for creating an account and submitting an application through the Application Tracker may be found here.


When a Bot is the Judge


What happens when our criminal justice system uses algorithms to help judges determine bail, sentencing, and parole?


Earlier this month, a group of researchers from Harvard and MIT directed an open letter to the Massachusetts Legislature to inform its consideration of risk assessment tools as part of ongoing criminal justice reform efforts in the Commonwealth. Risk assessment tools are pieces of software that courts use to assess the risk posed by a particular criminal defendant in a particular set of circumstances. Senate Bill 2185 — passed by the Massachusetts Senate on October 27, 2017 — mandates implementation of risk assessment tools in the pretrial stage of criminal proceedings.

In this episode of the Berkman Klein Center podcast, The Platform, Professor Chris Bavitz, Managing Director of the Cyberlaw Clinic, discusses some of the concerns and opportunities related to the use of risk assessment tools, as well as some of the related work the Berkman Klein Center is doing as part of the Ethics and Governance of AI initiative in partnership with the MIT Media Lab.

What need are risk assessment tools addressing? Why would we want to implement them?

Well, some people would say that they’re not addressing any need and ask why we would ever use a computer program to do these assessments. But I think that there are some ways in which they’re helping to solve problems, particularly around consistency. Another potential piece of it, and this is where we start to get sort of controversial, is that the criminal justice system is very biased and has historically treated racial minorities and other members of marginalized groups poorly. A lot of that may stem from human biases that creep in anytime you have one human evaluating another human being. So there’s an argument to be made that if we can do risk scoring right and turn it into a relatively objective process, we might remove from judges the kind of discretion that leads to biased decisions.

Are we there yet? Can these tools eliminate bias like that?

My sense is that from a computer science perspective we’re not there. In general, these kinds of technologies that use machine learning are only as good as the data on which they’re trained. So if I’m trying to decide whether you’re going to come back for your hearing in six months, the only information that I have to train a risk scoring tool to give me a good prediction on that front is data about people like you who came through the criminal justice system in the past. And if we take as a given that the whole system is biased, then the data coming out of that system is biased. And when we feed that data to a computer program, the results are going to be biased.
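That feedback loop is easy to demonstrate with a toy model. The sketch below is ours, not Bavitz's; all data is synthetic and the feature names are hypothetical. Two groups behave identically, but the historical labels were recorded under uneven enforcement attention, and a model trained on those labels reproduces the skew:

```python
# Toy illustration: bias in training labels propagates to predictions.
# Synthetic data only; feature names are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)           # two groups with identical behavior
prior_contacts = rng.poisson(1.0, n)    # same distribution for both groups

# Historical "failed to appear" labels, skewed by extra enforcement on group 1:
observed = (rng.random(n) < 0.2 + 0.15 * group).astype(int)

X = np.column_stack([group, prior_contacts])
model = LogisticRegression().fit(X, observed)

# The model dutifully learns the skew baked into the labels:
for g in (0, 1):
    probe = np.array([[g, 1]])          # same prior contacts, different group
    print(f"group {g}: predicted risk {model.predict_proba(probe)[0, 1]:.2f}")
```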

And we don’t know what actually goes into these tools?

Many of the tools that are in use in states around the country are tools that are developed by private companies. So with most of the tools we do not have a very detailed breakdown of what factors are being considered, what relative weights are being given to each factor, that sort of thing. So one of the pushes for advocates in this area is that at the very least we need more transparency.
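As a hedged sketch of what that transparency could look like in practice, consider a scoring tool whose factors, relative weights, and per-factor contributions are all disclosed and inspectable. The factor names and weights below are invented for illustration and are not taken from any deployed tool:

```python
# Hypothetical transparent risk tool: every factor and weight is published,
# so a defendant, judge, or auditor can see exactly what drove a score.
import math

FACTORS = {                            # invented weights, illustration only
    "prior_failures_to_appear": 0.35,
    "pending_charges": 0.20,
    "age_at_first_arrest": -0.04,      # per year
}
INTERCEPT = -1.5

def risk_score(defendant: dict) -> float:
    """Logistic score over the disclosed factors."""
    z = INTERCEPT + sum(w * defendant[f] for f, w in FACTORS.items())
    return 1 / (1 + math.exp(-z))

def contributions(defendant: dict) -> dict:
    """Per-factor contributions to the score, available for review."""
    return {f: w * defendant[f] for f, w in FACTORS.items()}

d = {"prior_failures_to_appear": 2, "pending_charges": 1, "age_at_first_arrest": 19}
print(round(risk_score(d), 3), contributions(d))
```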

Tell me about the Open Letter to the Legislature. Why did you write it?

The Massachusetts Senate and House are in the process of considering criminal justice reform broadly speaking in Massachusetts. The Senate bill has some language in it that suggests that risk scoring tools should be adopted in the Commonwealth and that we should take steps to make sure that they’re not biased. And a number of us, most of whom are involved in the Berkman and MIT Media Lab AI Ethics and Governance efforts, signed onto this open letter to the Mass Legislature that basically said, “Look these kinds of tools may have a place in the system, but simply saying ‘Make sure they’re not biased’ is not enough. And if you’re going to go forward, here are a whole bunch of principles that we want you to adhere to,” basically trying to set up processes around both the procurement or development of the tool, the implementation of the tool, the training of the judges on how to use it and what the scores really mean and how they should fit into their legal analysis, and then ultimately the rigorous evaluation of the outcomes. Are these tools actually having the predictive value that was promised? How are we doing on the bias front? Does this seem to be generating results that are biased in statistically significant ways?
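On that last question, a standard statistical check is straightforward to run once outcome data exists. The sketch below is illustrative only, with synthetic audit counts rather than any Massachusetts data: it asks whether two groups' false positive rates differ by more than chance.

```python
# Illustrative bias audit (synthetic counts): among people who did NOT fail
# to appear, did the tool flag one group as high-risk more often than another?
import numpy as np
from scipy.stats import chi2_contingency

flagged_a, total_a = 120, 1000   # group A: 12% false positive rate
flagged_b, total_b = 180, 1000   # group B: 18% false positive rate

table = np.array([[flagged_a, total_a - flagged_a],
                  [flagged_b, total_b - flagged_b]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.5f}")  # small p: the gap is unlikely to be chance
```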

What are you hoping will happen next?

I think we would view part of our mission here at Berkman Klein as making sure that this is the subject of vigorous debate. Informed debate, to be clear, because I think that sometimes the debate about this devolves into either that technology is going to solve all our problems, or it’s a dystopian future with robotic judges that are going to sentence us to death, and I don’t think it’s either of those things. Having this conversation in a way that is nuanced and responsible will be really difficult, but I think it’s something we absolutely have to do.

This initiative at Berkman Klein and MIT is the Ethics and Governance of Artificial Intelligence Initiative, but there’s nothing about anything we’ve talked about here that really has to do with artificial intelligence where the computer program is learning and evolving and changing and adapting over time. But that’s coming. And the more we get used to these kinds of systems working in the criminal justice system and spitting out risk scores that judges take into account, the more comfortable we’re going to be as the computing power increases and the autonomy of these programs increases.

I don’t mean to be too dystopic about it and say that bad stuff is coming, but it’s only a matter of time. It’s happening in our cars, and it’s happening in our news feeds on social media sites. More and more decisions are being made by algorithms. And anytime we get a technological intervention in a system like this, particularly where people’s freedom is at stake, I think we want to tread really carefully, recognizing that the next iteration of this technology is going to be more extensive, and raise even more challenging questions.


Subscribe to us on SoundCloud, iTunes, or RSS.


A Pessimist’s Guide to the Future of Technology

featuring Dr. Ian Bogost, Professor of Interactive Computing at the Georgia Institute of Technology, in conversation with Professor Jeffrey Schnapp, Professor of Romance Languages & Literature, Harvard Graduate School of Design

Two decades of technological optimism in computing have proven foolhardy. Let’s talk about new ways to anticipate what might go right and wrong, using a technology that has not yet mainstreamed—autonomous vehicles—as a test case.

Parent Event: Berkman Klein Luncheon Series. Event Date: Dec 12, 2017, 12:00 pm.

Tuesday, December 12, 2017 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University
Harvard Law School campus
Wasserstein Hall, Milstein East C (HLS campus map)
RSVP required to attend in person
Event will be live webcast at 12:00 pm

Since the rise of the web in the 1990s, technological skeptics have always faced resistance. To question the virtue and righteousness of tech, and especially computing, was seen as truculence, ignorance, or Luddism. But today, the real downsides of tech, from fake news to data breaches to AI-operated courtrooms to energy-sucking bitcoin mines, have become both undeniable and somewhat obvious in retrospect.

In light of this new technological realism, perhaps there is appetite for new ways to think about and plan for the future of technology, anticipating what might go right and wrong once unproven tech mainstreams quickly. This talk will consider a technology that has not yet mainstreamed—autonomous vehicles—as a test case.

About Ian

Dr. Ian Bogost is an author and an award-winning game designer. He is Ivan Allen College Distinguished Chair in Media Studies and Professor of Interactive Computing at the Georgia Institute of Technology, where he also holds an appointment in the Scheller College of Business. Bogost is also Founding Partner at Persuasive Games LLC, an independent game studio, and a Contributing Editor at The Atlantic. He is the author or co-author of ten books including Unit Operations: An Approach to Videogame Criticism and Persuasive Games: The Expressive Power of Videogames.

Bogost is also the co-editor of the Platform Studies book series at MIT Press, and the Object Lessons book and essay series, published by The Atlantic and Bloomsbury.

Bogost’s videogames about social and political issues cover topics as varied as airport security, consumer debt, disaffected workers, the petroleum industry, suburban errands, pandemic flu, and tort reform. His games have been played by millions of people and exhibited or held in collections internationally, at venues including the Smithsonian American Art Museum, the Telfair Museum of Art, The San Francisco Museum of Modern Art, The Museum of Contemporary Art, Jacksonville, the Laboral Centro de Arte, and The Australian Centre for the Moving Image.

His independent games include Cow Clicker, a Facebook game send-up of Facebook games that was the subject of a Wired magazine feature, and A Slow Year, a collection of videogame poems for Atari VCS, Windows, and Mac, which won the Vanguard and Virtuoso awards at the 2010 IndieCade Festival.

Bogost holds a Bachelor's degree in Philosophy and Comparative Literature from the University of Southern California, and a Master's and Ph.D. in Comparative Literature from UCLA. He lives in Atlanta.

About Jeffrey

Jeffrey is Professor of Romance Languages & Literature, on the teaching faculty of the Harvard Graduate School of Design; faculty director of metaLAB (at) Harvard; and Director, Berkman Klein Center for Internet & Society. A cultural historian with research interests extending from Roman antiquity to the present, his most recent books are The Electric Information Age Book (a collaboration with the designer Adam Michaels; Princeton Architectural Press, 2012) and Italiamerica II (Il Saggiatore, 2012). His pioneering work in the domains of digital humanities and digitally augmented approaches to cultural programming includes curatorial collaborations with the Triennale di Milano, the Cantor Center for the Visual Arts, the Wolfsonian-FIU, and the Canadian Center for Architecture. His Trento Tunnels project — a 6000 sq. meter pair of highway tunnels in Northern Italy repurposed as a history museum — was featured in the Italian pavilion of the 2010 Venice Biennale and at the MAXXI in Rome in RE-CYCLE - Strategie per la casa la città e il pianeta (fall-winter 2011).



Charting a Roadmap to Ensure AI Benefits All


An international symposium aimed at building capacity and exploring ideas for data democratization and inclusion in the age of AI.


AI-based technologies — and the vast datasets that power them — are reshaping a broad range of sectors of the economy and are increasingly affecting the ways in which we live our lives. But to date these systems remain largely the province of a few large companies and powerful nations, raising concerns over how they might exacerbate inequalities and perpetuate bias against underserved and underrepresented populations.

In early November, on behalf of a global group of Internet research centers known as the Global Network of Internet & Society Centers (NoC), the Institute for Technology & Society of Rio de Janeiro and the Berkman Klein Center for Internet & Society at Harvard University co-organized a three-day symposium on these topics in Brazil. The event brought together representatives from academia, advocacy groups, philanthropies, media, policy, and industry from more than 20 nations to start identifying and implementing ways to make the class of technologies broadly termed “AI” more inclusive.

The symposium — attended by about 170 people from countries including Nigeria, Uganda, South Africa, Kenya, Egypt, India, Japan, Turkey, and numerous Latin American and European nations — was intended to build collaborative partnerships and identify research questions as well as action items. These may include efforts to draft a human rights or regulatory framework for AI; define ways to democratize data access and audit algorithms and review their effects; and commit to designing and deploying AI that incorporates the perspectives of traditionally underserved and underrepresented groups, which include urban and rural poor communities, women, youth, LGBTQ individuals, ethnic and racial groups, and people with disabilities.

Read more about this event in our Medium post.


A Layered Model for AI Governance


Publication Date: 20 Nov 2017. External Links: Download from DASH | Download from IEEE Internet Computing

Abstract
AI-based systems are “black boxes,” resulting in massive information asymmetries between the developers of such systems and consumers and policymakers. In order to bridge this information gap, this article proposes a conceptual framework for thinking about governance for AI.

Many sectors of society rapidly adopt digital technologies and big data, resulting in the quiet and often seamless integration of AI, autonomous systems, and algorithmic decision-making into billions of human lives[1][2]. AI and algorithmic systems already guide a vast array of decisions in both private and public sectors. For example, private global platforms, such as Google and Facebook, use AI-based filtering algorithms to control access to information. AI algorithms that control self-driving cars must decide how to weigh the safety of passengers and pedestrians[3]. Various applications, including security and safety decision-making systems, rely heavily on AI-based face recognition algorithms. And a recent study from Stanford University describes an AI algorithm that can deduce the sexuality of people on a dating site with up to 91 percent accuracy[4]. Alarmed by the capabilities of AI evidenced within this study, and as AI technologies move toward broader adoption, some voices in society have expressed concern about the unintended consequences and potential downsides of widespread use of these technologies.

To ensure transparency, accountability, and explainability for the AI ecosystem, our governments, civil society, the private sector, and academia must be at the table to discuss governance mechanisms that minimize the risks and possible downsides of AI and autonomous systems while harnessing the full potential of this technology[5]. Yet the process of designing a governance ecosystem for AI, autonomous systems, and algorithms is complex for several reasons. As researchers at the University of Oxford point out[3], separate regulation solutions for decision-making algorithms, AI, and robotics could misinterpret legal and ethical challenges as unrelated, which is no longer accurate in today’s systems. Algorithms, hardware, software, and data are always part of AI and autonomous systems. To regulate ahead of time is difficult for any kind of industry. Although AI technologies are evolving rapidly, they are still in the development stages. A global AI governance system must be flexible enough to accommodate cultural differences and bridge gaps across different national legal systems. While there are many approaches we can take to design a governance structure for AI, one option is to take inspiration from the development and evolution of governance structures that act on the Internet environment. Thus, here we discuss different issues associated with governance of AI systems, and introduce a conceptual framework for thinking about governance for AI, autonomous systems, and algorithmic decision-making processes.


Accountability of AI Under the Law: The Role of Explanation


The paper reviews current societal, moral, and legal norms around explanations, and then focuses on the different contexts under which an explanation is currently required under the law. It ultimately finds that, at least for now, AI systems can and should be held to a similar standard of explanation as humans currently are.

Publication Date: 27 Nov 2017. External Links: Download from SSRN | Download from DASH | Download from arXiv.org

by Finale Doshi-Velez and Mason Kortz

for the Berkman Klein Center Working Group on Explanation and the Law:
Chris Bavitz, Harvard Law School; Berkman Klein Center for Internet & Society at Harvard University 
Ryan Budish, Berkman Klein Center for Internet & Society at Harvard University
Finale Doshi-Velez, John A. Paulson School of Engineering and Applied Sciences, Harvard University
Sam Gershman, Department of Psychology and Center for Brain Science, Harvard University
Mason Kortz, Harvard Law School Cyberlaw Clinic
David O'Brien, Berkman Klein Center for Internet & Society at Harvard University
Stuart Shieber, John A. Paulson School of Engineering and Applied Sciences, Harvard University
James Waldo, John A. Paulson School of Engineering and Applied Sciences, Harvard University
David Weinberger, Berkman Klein Center for Internet & Society at Harvard University
Alexandra Wood, Berkman Klein Center for Internet & Society at Harvard University

Abstract

The ubiquity of systems using artificial intelligence or "AI" has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before; applications range from clinical decision support to autonomous driving and predictive policing. That said, common sense reasoning [McCarthy, 1960] remains one of the holy grails of AI, and there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems [Bostrom, 2003, Amodei et al., 2016, Sculley et al., 2014].

There are many ways to hold AI systems accountable. In this work, we focus on one: explanation. Questions about a legal right to explanation from AI systems were recently debated in the EU General Data Protection Regulation [Goodman and Flaxman, 2016, Wachter et al., 2017], and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. Good choices about when to demand explanation can help prevent negative consequences from AI systems, while poor choices may not only fail to hold AI systems accountable but also hamper the development of much-needed beneficial AI systems.

Below, we briefly review current societal, moral, and legal norms around explanation, and then focus on the different contexts under which explanation is currently required under the law. We find that there exists great variation around when explanation is demanded, but there also exist important consistencies: when demanding explanation from humans, what we typically want to know is how and whether certain input factors affected the final decision or outcome.

These consistencies allow us to list the technical considerations that must be addressed if we desire AI systems that can provide the kinds of explanations that are currently required of humans under the law. Contrary to popular wisdom of AI systems as indecipherable black boxes, we find that this level of explanation should often be technically feasible but may sometimes be practically onerous; there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard.

The authors have invited researchers, technologists, and policy makers to engage with the ideas outlined in the paper by emailing mkortz@cyber.harvard.edu and finale@seas.harvard.edu. For questions and comments related to broader AI themes and/or related activities of the Ethics and Governance of Artificial Intelligence Initiative, please email ai-questions@cyber.harvard.edu.


Designing Artificial Intelligence to Explain Itself

A new working paper maps out critical starting points for thinking about explanation in AI systems.

As we integrate artificial intelligence deeper into our daily technologies, it becomes important to ask “why” not just of people, but of systems. A new working paper from the Berkman Klein Center at Harvard University and the MIT Media Lab maps out critical starting points for thinking about explanation in AI systems. 


“Why did you do that?” The right to ask that deceptively simple question and expect an answer creates a social dynamic of interpersonal accountability. Accountability, in turn, is the foundation of many important social institutions, from personal and professional trust to legal liability to governmental legitimacy and beyond.

As we integrate artificial intelligence deeper into our daily technologies, it becomes important to ask “why” not just of people, but of systems. However, human and artificial intelligences are not interchangeable. Designing an AI system to provide accurate, meaningful, human-readable explanations presents practical challenges, and our responses to those challenges may have far-reaching consequences. Setting guidelines for AI-generated explanations today will help us understand and manage increasingly complex systems in the future.

In response to these emerging questions, a new working paper from the Berkman Klein Center at Harvard University and the MIT Media Lab maps out critical starting points for thinking about explanation in AI systems. “Accountability of AI Under the Law: The Role of Explanation” is now available to scholars, policy makers, and the public.

“If we’re going to take advantage of all that AIs have to offer, we’re going to have to find ways to hold them accountable,” said Finale Doshi-Velez of Harvard’s John A. Paulson School of Engineering and Applied Sciences. “Explanation is one tool toward that end. We see a complex balance of costs and benefits, social norms, and more. To ground our discussion in concrete terms, we looked to ways that explanation currently functions in law.”

Doshi-Velez and Mason Kortz of the Berkman Klein Center and Harvard Law School Cyberlaw Clinic are lead authors of the paper, which is the product of an extensive collaboration within the Ethics and Governance of Artificial Intelligence Initiative, now underway at Harvard and MIT.

“An explanation, as we use the term in this paper, is a reason or justification for a specific decision made by an AI system: how a particular set of inputs led to a particular outcome,” said Kortz. “A helpful explanation will tell you something about this process, such as the degree to which an input influenced the outcome, whether changing a certain factor would have changed the decision, or why two similar-looking cases turned out differently.”
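The counterfactual flavor of explanation Kortz describes, whether changing a certain factor would have changed the decision, can be stated compactly in code. The sketch below is our own illustration, with a toy decision rule standing in for a real AI system:

```python
# Minimal counterfactual probe (illustrative): which factors, if changed,
# would flip the model's decision for this input?
from typing import Callable, Dict, List

def pivotal_factors(model: Callable[[Dict[str, float]], bool],
                    inputs: Dict[str, float],
                    deltas: Dict[str, float]) -> List[str]:
    baseline = model(inputs)
    flipped = []
    for name, delta in deltas.items():
        probe = dict(inputs, **{name: inputs[name] + delta})
        if model(probe) != baseline:   # decision changed -> factor is pivotal
            flipped.append(name)
    return flipped

# Toy stand-in for an AI decision: approve when income is high or debt is low.
rule = lambda x: x["income"] > 40 or x["debt_ratio"] < 0.3
print(pivotal_factors(rule,
                      {"income": 35, "debt_ratio": 0.5},
                      {"income": +10, "debt_ratio": -0.3}))
# -> ['income', 'debt_ratio']: changing either factor flips the outcome
```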

The paper reviews current societal, moral, and legal norms around explanations, and then focuses on the different contexts under which an explanation is currently required under the law. It ultimately finds that, at least for now, AI systems can and should be held to a similar standard of explanation as humans currently are.

“It won’t necessarily be easy to produce explanations from complex AI systems that are processing enormous amounts of data,” Kortz added. “Humans are naturally able to describe our internal processes in terms of cause and effect, although not always with great accuracy. AIs, on the other hand, will have to be intentionally designed with the capacity to generate explanations in mind. This paper is the starting point for a series of discussions that will be increasingly important in the years ahead. We’re hoping this generates some constructive feedback from inside and outside the Initiative.”

Guided by the Berkman Klein Center at Harvard and the MIT Media Lab, the Ethics and Governance of Artificial Intelligence Initiative aims to foster global conversations among scholars, experts, advocates, and leaders from a range of industries. By developing a shared framework to address urgent questions surrounding AI, the Initiative aims to help public and private decision-makers understand and plan for the effective use of AI systems for the public good. More information at: https://cyber.harvard.edu/research/ai


A Legal Anatomy of AI-generated Art: Part I


This Comment, published in the JOLT Digest, is the first in a two-part series on how lawyers should think about art generated by artificial intelligences, particularly with regard to copyright law. This first part charts the anatomy of the AI-assisted artistic process.


This Comment by Jessica Fjeld and Mason Kortz, originally published in the Journal of Law and Technology's online Digest, is the first in a two-part series on how lawyers should think about art generated by artificial intelligences, particularly with regard to copyright law. This first part charts the anatomy of the AI-assisted artistic process. The second Comment in the series examines how copyright interests in these elements interact and provides practice tips for lawyers drafting license agreements or involved in disputes around AI-generated artwork.

Advanced algorithms that display cognition-like processes, popularly called artificial intelligences or “AIs,” are capable of generating sophisticated and provocative works of art.[1] These technologies differ from widely-used digital creation and editing tools in that they are capable of developing complex decision-making processes, leading to unexpected outcomes. Generative AI systems and the artwork they produce raise mind-bending questions of ownership, from broad policy concerns[2] to the individual interests of the artists, engineers, and researchers undertaking this work. Attorneys, too, are beginning to get involved, called on by their clients to draft licenses or manage disputes.

The Harvard Law School Cyberlaw Clinic at the Berkman Klein Center for Internet & Society has recently developed a practice in advising clients in the emerging field at the intersection of art and AI. We have seen for ourselves how attempts to negotiate licenses or settle disputes without a common understanding of the systems involved may result in vague and poorly understood agreements and, worse, unnecessary conflict between parties. More often than not, this friction arises between reasonable parties who are open to compromise, but suffer from a lack of clarity over what, exactly, is being negotiated. In the course of solving such problems, we have dissected generative AIs and studied their elements from a legal perspective. The result is an anatomy that forms the foundation of our thinking—and our practice—on the subject of AI-generated art. When the parties to an agreement or dispute share a common vocabulary and understanding of the nature of the work, many areas of potential conflict evaporate.

Read the full Comment at the JOLT Digest.


Apply for a Spot in CopyrightX 2018


CopyrightX is a networked course that explores the current law of copyright; the impact of that law on art, entertainment, and industry; and the ongoing debates concerning how the law should be reformed. 


The application for the CopyrightX online sections will be open from Oct. 16 through Dec. 13. See CopyrightX:Sections for details.

CopyrightX is a networked course that explores the current law of copyright; the impact of that law on art, entertainment, and industry; and the ongoing debates concerning how the law should be reformed. Through a combination of recorded lectures, assigned readings, weekly seminars, and live interactive webcasts, participants in the course examine and assess the ways in which the copyright system seeks to stimulate and regulate creative expression.

In 2013, HarvardX, Harvard Law School, and the Berkman Klein Center for Internet & Society launched an experiment in distance education: CopyrightX, the first free and open distance learning course on law. After five successful offerings, CopyrightX is an experiment no longer. Under the leadership of Professor William Fisher, who created and directs the course, CopyrightX will be offered for a sixth time from January to May 2018. 

Three types of courses make up the CopyrightX Community:
•    a residential course on Copyright Law, taught by Prof. Fisher to approximately 100 Harvard Law School students;
•    an online course divided into sections of 25 students, each section taught by a Harvard Teaching Fellow;
•    a set of affiliated courses based at educational institutions worldwide, each taught by an expert in copyright law.

Participation in the 2018 online sections is free and is open to anyone at least 13 years of age, but enrollment is limited. Admission to the online sections will be administered through an open application process that ends on December 13, 2017. We welcome applicants from all countries, as well as lawyers and non-lawyers alike. To request an application, visit http://brk.mn/applycx18. For more details, see CopyrightX:Sections. (The criteria for admission to each of the affiliated courses are set by the course’s instructor. Students who will enroll in the affiliated courses may not apply to the online sections.)

We encourage widespread promotion of the application through personal and professional networks and social media. Feel free to circulate: 
•    this blog post 
•    the application page 


An Open Letter to the Members of the Massachusetts Legislature Regarding the Adoption of Actuarial Risk Assessment Tools in the Criminal Justice System


This open letter — signed by Harvard and MIT-based faculty, staff, and researchers — is directed to the Massachusetts Legislature to inform its consideration of risk assessment tools as part of ongoing criminal justice reform efforts in the Commonwealth.

Publication Date: 9 Nov 2017. External Links: Download from DASH | Read the letter on Medium

This open letter — signed by Harvard and MIT-based faculty, staff, and researchers Chelsea Barabas, Christopher Bavitz, Ryan Budish, Karthik Dinakar, Cynthia Dwork, Urs Gasser, Kira Hessekiel, Joichi Ito, Ronald L. Rivest, Madars Virza, and Jonathan Zittrain — is directed to the Massachusetts Legislature to inform its consideration of risk assessment tools as part of ongoing criminal justice reform efforts in the Commonwealth.

 


Plain Text: The Poetics of Computation

featuring Dennis Tenen, Assistant Professor of English and Comparative Literature at Columbia University

Computers—from electronic books to smart phones—play an active role in our social lives. Our technological choices thus entail theoretical and political commitments. Dennis Tenen takes up today's strange enmeshing of humans, texts, and machines to argue that our most ingrained intuitions about texts are profoundly alienated from the physical contexts of their intellectual production.

Event Date: Nov 28, 2017, 12:00 pm.

Tuesday, November 28, 2017 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University
Harvard Law School campus
Wasserstein Hall, Milstein East C, Room 2036 (HLS campus map)
RSVP required to attend in person

Watch Live Starting at 12pm

If you experience a video disruption, reload to refresh the webcast.

We are pleased to welcome back Berkman Klein Fellow alumnus, Dennis Tenen, who joins us to discuss his new book, Plain Text: The Poetics of Computation (Stanford UP, 2017).

This book challenges the ways we read, write, store, and retrieve information in the digital age. Computers—from electronic books to smart phones—play an active role in our social lives. Our technological choices thus entail theoretical and political commitments. Dennis Tenen takes up today's strange enmeshing of humans, texts, and machines to argue that our most ingrained intuitions about texts are profoundly alienated from the physical contexts of their intellectual production. Drawing on a range of primary sources from both literary theory and software engineering, he makes a case for a more transparent practice of human–computer interaction. Plain Text is thus a rallying call, a frame of mind as much as a file format. It reminds us, ultimately, that our devices also encode specific modes of governance and control that must remain available to interpretation.

 

Dennis's Biography:

Dennis Tenen's research happens at the intersection of people, texts, and technology.

His recent work appears on the pages of Amodern, boundary 2, Computational Culture, Modernism/modernity, New Literary History, Public Books, and LA Review of Books on topics that range from book piracy to algorithmic composition, unintelligent design, and history of data visualization.

He teaches a variety of classes in fields of literary theory, new media studies, and critical computing in the humanities.

Tenen is a co-founder of Columbia's Group for Experimental Methods in the Humanities and author of Plain Text: The Poetics of Computation (Stanford UP, 2017).

For an updated list of projects, talks, and publications please visit dennistenen.com.

 



Harvard Open Access Project Part-Time Research Assistant Opportunity

The Harvard Open Access Project (HOAP) at the Berkman Klein Center for Internet & Society is hiring a part-time research assistant!

The Harvard Open Access Project (HOAP) fosters open access to research, within Harvard and beyond, using a combination of education, consultation, collaboration, research, tool-building, and direct assistance. HOAP is a project within the Berkman Klein Center for Internet & Society at Harvard University. For more detail, see the project home page at http://cyber.harvard.edu/hoap.

The Research Assistant will contribute to the Open Access Tracking Project (OATP) using the TagTeam social-tagging platform, contribute to the Open Access Directory (OAD), perform occasional research, help with grant reporting, and strategize about open access inside and outside Harvard University. The position offers remote work options, flexible scheduling, and community work spaces at the Berkman Klein Center for Internet & Society.

The position will remain open until the job is filled, and we plan to begin reviewing applicants as soon as possible.

Work Requirements/Benefits Information:

This part-time position is 17.25 hours per week. The pay rate is $11.50+ per hour, with the possibility of more to suit qualifications and experience. This position does not include benefits. The role will include the expectation of regular weekend work as needed to support time-sensitive projects (approximately 2-4 hours of the total 17.25). The Research Assistant must be based in Massachusetts. The work may be done remotely, but will include regular face-to-face meetings in Cambridge, Massachusetts to review progress and discuss new ideas. Unfortunately we are not able to sponsor a visa for this position. This position is approved through the end of August 2018.

To Apply:

Please send your current CV or resume and a cover letter summarizing your interest and experience to Peter Suber at psuber@cyber.law.harvard.edu with “HOAP application” in the subject line.

 



#FellowFriday! Get to know the 2017-2018 Fellows

This series of short video interviews highlights the new 2017-2018 Berkman Klein fellows. Check back every Friday for new additions!

published October 27, 2017

Tell us about a research question you're excited to address this year and why it matters to you.
This year I'm really trying to understand how communication on social media leads to offline violence. So I'm studying a Twitter dataset of young people in Chicago to better understand how things like grief and trauma and love and happiness all play out on Twitter and the relationship between that communication and offline gun violence. 

I started my research process in Chicago and I have been just completely troubled by the amount of violence that happens in the city. And one of the ways in which that violence happens is through social media communication. And so I want to be a part of the process of ending violence through learning how young people communicate online.

***

published October 27, 2017

Tell us about a research question you're excited to address this year and why it matters to you.
I’m working on the Ethics and Governance of Artificial Intelligence project here at Berkman Klein. There are a lot of questions as to how exactly incorporating this new technology into different social environments is really going to affect people, and I think one of the most important things is getting the perspectives of the people who are actually going to be impacted. So, I’m looking forward to participating in some early educational initiatives and some discussions that we can post online in blog posts and things, to help people feel like they’re more familiar with this subject and more comfortable, because it can be really intimidating.

Why should people care about this issue?
Right now, early versions of machine learning and artificial intelligence applications are being used in institutions ranging from the judicial system to financial institutions, and they’re really going to impact everyone. I think it’s important for people to talk about how they’re being implemented and what the consequences of that are for them, and that we should have an open discussion, and people can’t do that if they’re unfamiliar with the technology or why it’s being employed. I think that everyone needs to have at least a basic familiarity with these things because in ten years there’s not going to be an institution that doesn’t use it in some way.

How did you become interested in this topic?
I grew up in a pretty low income community that didn’t have a lot of access to these technologies initially, and so I was very new to even using a computer when I got into college. It’s something that was hard for me initially, but that I started really getting interested in, partially because I’m a huge sci-fi fan now, and so I think that sci-fi and fiction really opens up your eyes to both the opportunities and the potential costs of using different advanced technologies. I wanted to be part of the conversation about how we would actually approach a future where these things were possible and to make sure that we would use them in a way that would benefit us and not lead to these scarier, more dystopian views of what could happen.

What excites you most about technology and its potential impact on our world?
Software is so scalable that we can offer more resources and more information to so many more people at a lower cost. We’re also at a time where we have so much more information than we’ve ever had in history, so things like machine learning and artificial intelligence can really help to open up the answers that we can get from all of that data, including some very non-intuitive answers that people just have not been able to find themselves.

What scares you most?
I think the thing that scares me most is that artificial intelligence software is going to be employed in institutions and around populations that don’t understand both what it has to offer and its limitations. It will just be taken as objective fact or a scientific opinion that can’t be questioned, when it’s important to realize that this is something crafted by humans, that can be fallible, that can be employed in different ways and have different outcomes. My biggest fear is that we won’t question it, and that these systems will simply be deployed without any kind of public dialogue or pushback if they have negative consequences.

The Slippery Slope of Internet Censorship in Egypt


The first Internet Monitor research bulletin summarizes the recent, dramatic increase in Internet censorship in Egypt, examines the Twitter conversation around website blocking in Egypt, and identifies ways that users disseminate banned content.

Internet filtering in Egypt illustrates how censorship can be a slippery slope. After an extended period of open access to the Internet in Egypt lasting several years following the January 2011 revolution, the government dramatically increased its censorship of political content between December 2015 and September 2017. What started with the filtering of one regional news website in 2015 had grown to the filtering of over 400 websites by October 2017. The blocked websites include local and regional news and human rights websites, websites based in or affiliated with Qatar, and the websites of Internet privacy and circumvention tools. This bulletin examines how Egyptian Internet users have reacted to the pervasive blocking and describes their efforts to counter the censorship. These efforts center on disseminating banned content through platforms protected by encrypted HTTPS connections, such as Facebook and Google Drive, which makes it challenging for censors to block individual objectionable URLs without blocking the entire platform.
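To make the technical point concrete, here is a minimal sketch in Python (ours, not the bulletin's methodology) of why HTTPS frustrates URL-level filtering: an on-path censor can observe the DNS lookup and the cleartext Server Name Indication (SNI) in the TLS handshake, so it knows which domain a user is visiting, but the HTTP request line carrying the specific path travels inside the encrypted channel. The document path below is hypothetical.

```python
# Minimal sketch: what an on-path censor can and cannot see in an HTTPS
# request. Assumes ordinary TLS with cleartext SNI (no Encrypted Client Hello).
import socket
import ssl

host = "drive.google.com"        # visible to a censor via DNS and cleartext SNI
path = "/file/d/EXAMPLE_DOC_ID"  # hypothetical path; sent only inside the
                                 # encrypted channel, so invisible on the wire

context = ssl.create_default_context()
with socket.create_connection((host, 443)) as raw_sock:
    # The TLS handshake exposes `server_hostname` in cleartext, so a censor
    # watching this connection can block the whole domain at this point...
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        # ...but everything sent after the handshake is ciphertext, so the
        # censor cannot match or selectively block this specific URL.
        request = (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n\r\n"
        )
        tls_sock.sendall(request.encode("ascii"))
        print(tls_sock.recv(256))  # first bytes of the decrypted response
```

Blocking a single banned document hosted this way therefore requires blocking the entire platform, a far more visible and costly act of censorship, which is why users turn to these services to disseminate banned content.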

Read the complete bulletin on the Internet Monitor website.


Badges of Oppression, Positions of Strength: Digital Black Feminist Discourse and the Legacy of Black Women’s Technology Use

featuring Catherine Knight Steele, University of Maryland

The use of online technology by black feminist thinkers has changed the principles, praxis, and product of black feminist writing, and has simultaneously changed the technologies themselves. Texts from the antebellum South through the 20th century contextualize the contemporary relationship between black women and digital media.

Part of the Berkman Klein Luncheon Series

Tuesday, November 21, 2017 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University
Harvard Law School campus
*VENUE CHANGE* Wasserstein Hall, Room 3019 (HLS campus map)
RSVP required to attend in person
Event will be live webcast at 12:00 pm.

Black women have historically occupied a unique position, existing in multiple worlds, manipulating multiple technologies, and maximizing their resources for survival in a system created to keep them from thriving. I present a case for the unique development of black women’s relationship with technology by analyzing historical texts that explore the creation of black womanhood in contrast to white womanhood and black manhood in the early colonial and antebellum periods in the U.S. This study of Black feminist discourse online situates current practices in the context of the historical use and mastery of communicative technology by the black community broadly and black women more specifically. By tracing the history of black feminist thinkers in relation to technology, we move from a deficiency model of black women’s use of technology to recognizing their digital skills and Internet use as part of a long-developed expertise.

About Catherine

Catherine Knight Steele is an Assistant Professor of Communication at the University of Maryland, College Park, and the Director of the Andrew W. Mellon-funded African American Digital Humanities Initiative (AADHum). As director of AADHum, Dr. Steele works to foster a new generation of scholars and scholarship at the intersection of African American Studies, Digital Humanities, and Digital Studies. She earned her Ph.D. in Communication from the University of Illinois at Chicago. Her research focuses on race, gender, and media, with a specific focus on African American culture and discourse in traditional and new media. She examines representations of marginalized communities in the media and how traditionally marginalized populations resist oppression and use online technology to create spaces of community. Dr. Steele has published in new media journals such as Social Media + Society and Television & New Media, in the edited volume Intersectional Internet (eds. S. Noble & B. Tynes), and in the forthcoming edited collection A Networked Self: Birth, Life, Death (ed. Z. Papacharissi). She is currently working on a book manuscript about Digital Black Feminism.


Black Users, Enclaving, and Methodological Challenges in a Shifting Digital Landscape

featuring Sarah Florini, Assistant Professor of Film and Media Studies, Department of English, Arizona State University

Researchers often consider the technological practices of Black Americans for insight into race and cultural production. But Black users are regularly at the digital vanguard, anticipating shifts in the media landscape that raise methodological and ethical questions for researchers.

Part of the Berkman Klein Luncheon Series

Tuesday, December 5, 2017 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University
Harvard Law School campus
Wasserstein Hall, Room 1010 (HLS campus map)
RSVP required to attend in person
Event will be live webcast at 12:00 pm.

Black users have consistently been at the vanguard of digital and social media use, pioneering and anticipating digital trends including live tweeting and the podcast boom. As harassment on social media platforms becomes increasingly aggressive, and increasingly automated, users must develop strategies for navigating this hostility. Having long endured coordinated campaigns of harassment, Black users are again at the forefront of a shift in digital practices – the creation of digital enclaves. With new patterns of use, digital media researchers are faced with new, and a few old, methodological and ethical questions.

About Sarah

Sarah Florini is an Assistant Professor of Film and Media Studies in the Department of English at Arizona State University. She earned a PhD in Communication and Culture from Indiana University. Her research focuses on the intersection of emerging media, Black American cultural production, and racial politics in the post-Civil Rights Movement landscape.

Links

Aaron Edwards, “Long Live the Group Chat,” The Outline, September 27, 2017.

What should the course catalog look like in the 21st century? Leveraging data and design for course selection and discovery

Curricle, with Professor Jeffrey Schnapp, metaLAB (at) Harvard

Visualized, annotated, connected: what should the course catalog look like in the 21st century? In this participatory lunch talk, members of metaLAB's Curricle team will share details of the new platform they're building for course selection and discovery—and invite participants to help design and refine the system.

Part of the Berkman Klein Luncheon Series

Tuesday, November 7, 2017 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University
Harvard College campus
Lamont Library, Harvard Yard (Map & Directions)
RSVP required to attend in person; photo ID required at the event

The event is not being webcast. Audio and video will be available shortly after the event.

Curricle will offer a new experience in course selection at Harvard: a platform that gives students powerful data visualization and analytics tools for browsing and selecting courses. The platform will enable students to see the broader landscape within which they navigate the curriculum, offering more opportunities for choice and customization. Additionally, it will offer opportunities for students and scholars to see trends in Harvard’s curriculum over time.

The usual course-selection process has blind spots where life-changing courses can lurk undiscovered. Especially in a post-disciplinary era, finding ways to identify links and currents among courses across departments—to chart, visualize, and connect far-flung parts of the curriculum—will allow students to forge new and productive paths. metaLAB’s team of designers and scholars will offer an interactive lunch to preview Curricle and invite engagement, reflection, and a comprehensive rethinking of the course-selection experience.

About metaLAB

metaLAB (at) Harvard, led by Professor Jeffrey Schnapp (RLL, GSD), and headquartered at the Berkman Klein Center, is a creative research team exploring new roles for media and technology in the arts and humanities. The group's project-based research takes many forms, from museum and gallery installations to books, websites, and interventions in virtual and real space.

About Professor Jeffrey Schnapp

Before moving to Harvard in 2011, Jeffrey Schnapp occupied the Pierotti Chair of Italian Studies at Stanford, where he founded and led the Stanford Humanities Lab between 1999 and 2009. He is a cultural historian, designer, and curator with research interests extending from antiquity to the present; his most recent books include The Electric Information Age Book, Modernitalia, Digital_Humanities, and The Library Beyond the Book. At Harvard he occupies the Carl A. Pescosolido Chair in Romance and Comparative Literatures, while also serving as a faculty member of the Architecture department at the Graduate School of Design and as one of the faculty co-directors of the Berkman Klein Center for Internet & Society. For more information, go to jeffreyschnapp.com.

National Security, Privacy, and the Rule of Law

A live webcast featuring Alex Abdo, Cindy Cohn, Alexander MacGillivray, Andrew McLaughlin, Matt Olsen, Daphna Renan, David Sanger, Bruce Schneier, Elliot Schrage, and Jeffrey Toobin in conversation with Professor Jonathan Zittrain

Friday, October 27, 2017
Live webcast from 11:00 am to 12:30 pm at http://200.hls.harvard.edu/natsec

While civil libertarians and conventional national security advocates have typically found little to agree on, today they share a profound anxiety about the trajectory of state intelligence gathering. For some, this reflects concern about invasions of privacy made possible by a digital environment in which every click and inquiry can be tracked and where our homes and workplaces have welcomed internet-aware appliances that could be repurposed for surveillance.

For others, there is a sense of undue empowerment of those who wish to cause harm and disruption, thanks to technologies that permit untraceable communications and cultivation and rallying of like-minded extremists.

Through a concrete hypothetical--ripped from tomorrow's headlines, if not today's--we will explore the difficult decisions to be made around these issues, involving actors from business, government, civil society, and the citizenry at large.

This event is part of Harvard Law School's bicentennial activities.


How Facebook Tries to Regulate Postings Made by Two Billion People

Berkman Klein Center hosts a day of conversation about reducing harmful speech online and hears from the Facebook executive in charge of platform moderation policies

On September 19, the Berkman Klein Center for Internet & Society hosted a public lunch talk with Monika Bickert, the Head of Global Policy Management at Facebook. The public event was followed by a meeting at which members of the Berkman Klein Center community explored broader research questions and topics related to the challenges of keeping tabs on the daily social media interactions of hundreds of millions of people.

The day was hosted by the Center’s Harmful Speech Online Project. Questions surrounding the algorithmic management of online content, and how those processes impact media and information quality, are also a core focus of the center’s Ethics and Governance of AI Initiative.

Later in the afternoon, members of the Berkman Klein Center community came together for additional discussion of content moderation broadly and of related questions of hate speech and online harassment. About 80 community members attended, including librarians, technologists, policy researchers, lawyers, students, and academics from a wide range of disciplines.

The afternoon included presentations about specific challenges in content moderation by Desmond Patton, Assistant Professor of Social Work at Columbia University and Fellow at the Berkman Klein Center, and Jenny Korn, an activist-scholar and doctoral candidate at the University of Illinois at Chicago and a Berkman Klein Fellow.

These researchers explained just how difficult it can be to moderate content when the language and symbols used to convey hate or violent threats evolve in highly idiosyncratic and context-dependent ways.

Read the full event summary on Medium.


Safe Spaces, Brave Spaces

with author John Palfrey, Head of School at Phillips Academy, Andover

Often in today’s political climate our commitments to liberty and equality are set at odds with one another. This tension is nowhere more evident than when we pit free expression against our goals for a diverse, equitable, and inclusive society. This book explores these tensions and seeks ways to make progress toward shared goals, for campuses and societies alike.


Tuesday, October 24, 2017
5:00-6:15 pm Book Talk, followed by 6:30-7:30 pm Reception
Berkman Klein Center for Internet & Society at Harvard University
Harvard Law School campus
Wasserstein 1023 (HLS campus map)
RSVP required to attend in person


Can diversity and free expression co-exist on our campuses?  How about in our town squares, our cities, and our world?  Join us for a discussion of two of the foundational values of our democracy in the digital age.

About John

John is the Head of School at Phillips Academy, Andover. He serves as Chair of the Board of Trustees of the Knight Foundation and LRNG. He also serves as a board member of the Data & Society Research Institute, School Year Abroad, and the Berkman Klein Center for Internet & Society at Harvard University.

John’s research and teaching focus on new media and learning. He has written extensively on Internet law, intellectual property, and the potential of new technologies to strengthen democracies locally and around the world. He is the author or co-author of several books, including Born Digital: How Children Grow Up in a Digital Age (Basic Books, revised edition, 2016) (with Urs Gasser); BiblioTech: Why Libraries Matter More Than Ever in the Age of Google (Basic Books, 2015); Interop: The Promise and Perils of Highly Interconnected Systems (Basic Books, 2012) (with Urs Gasser); Intellectual Property Strategy (MIT Press, 2012); and Access Denied: The Practice and Politics of Global Internet Filtering (MIT Press, 2008) (co-edited).

John previously served as the Henry N. Ess III Professor of Law and Vice Dean for Library and Information Resources at Harvard Law School. At the Berkman Klein Center for Internet & Society, he served as executive director from 2002 to 2008 and has continued as a faculty director since then. John came back to Harvard Law School from the law firm Ropes & Gray, where he worked on intellectual property, Internet law, and private equity transactions. He also served as a Special Assistant at the U.S. Environmental Protection Agency during the Clinton administration. He was the founding President of the Board of Directors of the Digital Public Library of America, served as a venture executive at Highland Capital Partners, and sat on the Boards of Directors of the Mass2020 Foundation, the Ames Foundation, and Open Knowledge Commons, among others. John was a Visiting Professor of Information Law and Policy at the University of St. Gallen in Switzerland during the 2007-2008 academic year.

John graduated from Harvard College, the University of Cambridge, and Harvard Law School. He was a Rotary Foundation Ambassadorial Scholar to the University of Cambridge and received the U.S. EPA Gold Medal, the agency's highest national award.

This event is being co-sponsored by the Harvard Law School Library and the Berkman Klein Center for Internet & Society at Harvard University.
