Information Ecosystems

Information, Power, and Consequences

  • InfoEco Podcast
  • InfoEco Blog
  • InfoEco Cookbook
    • About
    • Curricular Pathways
    • Cookbook Modules

Data

[Image: Zoom and Google icons]

EdTech Automation and Learning Management

By: Mario Khreiche
On: June 14, 2021
In: Mario Khreiche
Tagged: automation, Data, edtech, LMS, privacy, zoom

The Political Economy of Education Technology

The trend toward a greater presence of digital platforms in higher education has accelerated during the COVID-19 pandemic. Despite ongoing growing pains, remote learning tools have enabled many institutions to remain operational. The more or less seamless transition to virtual classrooms is largely owed to the ubiquitous use of Learning Management Systems (LMS), which have evolved from basic teaching supports into robust infrastructures. According to estimates, even before the pandemic 99% of colleges and universities had integrated school-wide LMS, while 85% of instructors had used an LMS in class, whether for in-person or distance learning, synchronous or asynchronous content. By the end of 2019, Instructure Canvas alone registered nearly 20 million enrollees in colleges and universities. As the LMS market saturates and new LMS adoptions decline, companies compete by promising advances in interoperability, customization, analytics, and design. The pandemic emergency clearly illustrated, for instance, the importance of seamless integration between video conferencing platforms and LMS. But rather than simply enabling remote learning, new education technology (EdTech) reimagines the pedagogic environment, extends performance metrics into virtual classrooms, and reshapes modes of participation and academic labor. For an already growing EdTech industry, the pandemic turned out to be an exceptional boon. Against the backdrop of an estimated $76.4 billion market size in 2019, that is, before the pandemic, analysts projected “a compound annual growth rate (CAGR) of 18.1% from 2020 to 2027”. And the first three months of 2020 alone accounted for $3 billion of “global EdTech Venture Capital, nearly 10%… [Read More]

The Changing Face of Literacy in the 21st Century: Dr. Annette Vee Visits the Podcast

By: Jane Rohrer
On: April 13, 2021
In: Annette Vee
Tagged: artificial intelligence, computer code, Data, digital humanities, digital literacy, digitization, Education, programming

The English language is a tough one to master. It’s a language full of contradictions, exceptions to seemingly nonsensical rules, and confusing homophones. English compositionists have spent decades studying how we learn to read and write it, and for most of that time, studies have focused on the language itself: with pens, pencils, and paper, or even a typewriter, little else was likely to interfere with or distract from a basic writer’s journey toward mastery. Our April 5, 2021 guest on the podcast, Dr. Annette Vee, studies how writing, and the entire concept of literacy, has changed since the proliferation of digital technologies. For a student to be considered “literate” in an English Composition course today, they must master not only the ins and outs of English itself (the minutiae of commas, i-before-e, their/there/they’re) but also the computer or device they use to compose: the administrative and participatory tasks of their class’s Learning Management System, their word processing application, the email service through which they send and read class-related messages, and so much more. And as Dr. Vee points out, a student or employee who pursues a career that uses computers might also be required to learn a programming language before they are considered truly “literate” in the language of their professional world. A lot more goes into language-based literacy today than just words on a page. Dr. Vee is Associate Professor of English and Director of the Composition Program here at the University of Pittsburgh, as well as a participant and co-leader of our Sawyer Seminar, which originated in the fall of 2019 (and… [Read More]

[Image: Cartogram of the 2008 US Presidential Election results]

Election Maps, Purple States, and Visualizing Space: A Visit with Professor Bill Rankin

By: Jane Rohrer
On: January 4, 2021
In: Bill Rankin
Tagged: Big Data, Bill Rankin, cartography, Data, data visualization, election maps, maps

On Friday, December 4, the University of Pittsburgh’s Mellon-Sawyer Seminar “Information Ecosystems: Creating Data (and Absence) from the Quantitative to the Digital Age” was joined by Bill Rankin, an Associate Professor of the History of Science at Yale University. Professor Rankin’s research focuses on the relationship between science and mapping, the environmental sciences and technology, and architecture and urbanism, in addition to methodological problems of digital scholarship, spatial history, and geographic analysis. His prize-winning first book, After the Map: Cartography, Navigation, and the Transformation of Territory in the Twentieth Century, was published by the University of Chicago Press in 2016. Professor Rankin is also an award-winning cartographer, and his maps have been published and exhibited widely in the U.S., Europe, and Asia. Rankin talked with the Sawyer Seminar participants, who are faculty and students at the University of Pittsburgh and Carnegie Mellon University, about cartography, election mapping, and the contemporary U.S. political landscape. Amid the many reactions to and characterizations of the historic 2020 presidential election, this meeting helped the Seminar participants understand how and why election mapping plays an increasingly crucial role in the electoral process; in particular, Rankin’s talk touched generatively on the concept of “purple states” or “purple places.” Purple has been offered in recent years as a more representative complication of the simple binarism of “blue” (liberal) and “red” (conservative) states. The “red” versus “blue” state discourse began as a simple, visual way for newscasters to characterize a state’s partisan tendencies over long durations of time. And while we do… [Read More]

Data, Desert Islands, and Digital Dark Ages: Richard Marciano on Records and Data Management

By: Erin O'Rourke
On: October 31, 2020
In: Richard Marciano
Tagged: Data, Education, Information Science, Records Management, Richard Marciano

On November 1, Dr. Richard Marciano, a professor at the University of Maryland, asked Sawyer Seminar participants, “If you were on an academic desert island, what data would you bring with you?” After hearing about his career, which has included working as a computational environmental scientist, working at a supercomputing center, studying electrical engineering, and most recently serving as a professor and director of data curation initiatives at UMD, it was clear that Dr. Marciano has had to make decisions like this one numerous times. He discussed moving between jobs and even universities and bringing relevant data sets and sources with him into these roles. Consequently, he lends a fascinating perspective to data curation and records management, as well as pedagogy in these fields. Dr. Marciano first came to UMD when the university was seeking professors to transform its Master’s in Library and Information Science program and change the way students were trained in digital and computational methods. To balance his own science background, he intentionally built teams with members from archival and library backgrounds. One of the courses he introduced was an eight-week intensive program across disciplines that uses digital methods to work through data problems. In teaching, he uses tools like Jupyter notebooks to create readable, touchable, interactive environments and learning spaces that others can build upon. In addition, he suggests universities create certificate programs for continuing education in digital methods for humanities and archival professions to keep up with current trends. For example, major curators of data like the National… [Read More]

Embedded and Interdisciplinary: Generosity in the “Trade Zone”

By: Sarah Reiff Conell
On: February 21, 2020
In: Edouard Machery
Tagged: collaboration, Data, digital humanities, Education, Information Ecosystems, Philosophy of Science

In a recent meeting of the Sawyer Seminar, Dr. Edouard Machery came to discuss the role of data in his work. He is a Distinguished Professor in the History and Philosophy of Science (HPS) Department at the University of Pittsburgh and Director of the Center for Philosophy of Science. The HPS department is inherently interdisciplinary, bringing together apparently diametrically opposed methods, like statistics and philosophy. Its website states: “Integrating Two Areas of Study: HPS supports the study of science, its nature and fundamentals, its origins, and its place in modern politics, culture, and society.” Though such a field requires many seemingly disparate skills, there was still interest in building a new domain: experimental philosophy. Dr. Machery engages in this area in his current research, as he states, “with a special focus on null hypothesis significance testing, external validity, and issues in statistics.” Engaging in such varied methods, and being interdisciplinary at a personal level, is difficult (to say the least). If Malcolm Gladwell is right that mastery of a subject takes roughly 10,000 hours of practice, there are only so many fields of expertise one can cultivate in a lifetime. Working in a domain in which one has gained expertise also takes time. Is it like a language? Are there polyglot parallels? After acquiring four, does one get faster at accruing expertise? Many specialists were drawn to their field because of a passion for the subject, and proficiency materialized as advanced degrees, formalized proof of… [Read More]

Research Software & Building Useful Data from Absence

By: Jane Rohrer
On: February 7, 2020
In: Matthew Lincoln
Tagged: Curation, Data, data visualization, Information Ecosystems, Museums

On February 7th, one of the Seminar’s very own participants headed our lunchtime discussion: Dr. Matthew Lincoln, a research software engineer at Carnegie Mellon University Libraries, talked with us about museum informatics, archive management, and computational approaches to humanities projects. Although his transition to software engineering is relatively recent, his experience with data modeling and analysis is not; before his move to Carnegie Mellon, Dr. Lincoln earned a Ph.D. in art history from the University of Maryland, where he used computational methods to study 16th- to 18th-century Dutch printmakers. This, along with his work on data engineering at the Getty Research Institute’s Getty Provenance Index Databases, makes him uniquely attuned to multiple aspects of building data sets and archiving. As Dr. Lincoln himself articulated during his talk, using large data sets as a Ph.D. candidate, with what he termed the “available technology,” alerted him to particular data absences within library and museum holdings; in other words, researchers can only carry out the large-scale digital projects for which data actually exist. If you’ve ever searched for an eBook only to find that a digital version of the text does not (yet) exist, you know this feeling; it is, on a smaller scale, the same feeling a researcher might have if they wanted, for example, to compare one library system’s entire collection to another’s, but there are no usable data with which to do such a project. The project idea is there; the necessary data are not. This is where and why Dr. Lincoln’s job becomes so essential; his work has helped… [Read More]

What you can see in museums is just the tip of the iceberg

By: Erin O'Rourke
On: February 6, 2020
In: Matthew Lincoln
Tagged: Curation, Data, Information Ecosystems, Linked Open Data, Matt Lincoln, Museums

While all of the Sawyer Seminar speakers so far have been scholars or users of information ecosystems, Matt Lincoln is perhaps unique in having built them. His Ph.D. in Art History, his time as a data research specialist at the Getty Research Institute, and most recently his work as a research software engineer at Carnegie Mellon University have given him substantial knowledge about museums’ information systems, as well as the broader context of the seminar. For Lincoln, “data” consists of collections of art and associated facts and metadata. In his public talk, entitled “Ways of Forgetting: The Librarian, The Historian, and the Machine,” Dr. Lincoln focused on a case study from his time at the Getty, where he worked on a project restructuring the way art provenance data were organized in databases. Lincoln argued that the way the data are structured can vary depending on who the creator or end-user of the information will be, whether librarian, historian, or computer. A historian would likely prefer open-ended text fields in which to establish a rich context with details specific to the piece, whereas a librarian would opt to record the same details about every piece, and a computer would prefer the data to be stored in some highly structured format, with lists of predefined terms that can populate each field. On top of balancing these disparate goals, Lincoln cited a particularly poignant Jira ticket, which asked: “Are we doing transcription of existing documents or trying to represent reality?” This question might well be answered with “both,” since the… [Read More]
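Lincoln’s three-way contrast can be made concrete with a small sketch. Everything below is invented for illustration (the names, field labels, and identifiers are assumptions, not the Getty’s actual provenance schema): the same sale event is recorded as a historian’s free text, a librarian’s uniform fields, and a machine-oriented record with controlled vocabularies, where nuance and uncertainty must be forced into predefined slots.

```python
# Hypothetical example only; not the Getty Provenance Index schema.

# Historian: open-ended text, rich context specific to this one piece.
historian_view = (
    "Sold by the dealer Pieter de Vos in Antwerp, probably in the spring "
    "of 1642, to an unnamed collector; the attribution remains uncertain."
)

# Librarian: the same fields recorded for every piece, values still free text.
librarian_view = {
    "seller": "Pieter de Vos",
    "buyer": "unknown",
    "place": "Antwerp",
    "date": "1642",
    "notes": "attribution uncertain",
}

# Computer: highly structured, with a predefined term list for each field.
TRANSACTION_TYPES = {"sale", "gift", "bequest", "theft"}

machine_view = {
    "transaction_type": "sale",      # must come from TRANSACTION_TYPES
    "seller_id": "agent:de_vos_p",   # stable identifier instead of a name
    "buyer_id": None,                # "unknown" becomes an explicit null
    "place_id": "tgn:antwerp",       # e.g. a gazetteer-style identifier
    "date_earliest": "1642-01-01",   # "probably spring 1642" becomes a range
    "date_latest": "1642-06-30",
}

# The machine view is validatable in a way the other two are not.
assert machine_view["transaction_type"] in TRANSACTION_TYPES
```

The design tension Lincoln described is visible here: each step toward the machine view gains queryability but sheds the hedges (“probably,” “unnamed”) that carry historical meaning in the free-text version.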

It’s not “just an algorithm”

By: Erin O'Rourke
On: January 23, 2020
In: Safiya Noble
Tagged: Algorithms, Data, Information Science, Safiya Umoja Noble

Safiya Umoja Noble, known for her best-selling book Algorithms of Oppression: How Search Engines Reinforce Racism, as well as her scholarship in Information Studies and African American Studies at UCLA, visited Pitt the week of January 24. She met with participants in the Sawyer Seminar, gave a public talk, and sat down with me for an interview on the Info Ecosystems Podcast. In Algorithms of Oppression, Dr. Noble described her experiences searching for terms related to race, women, and girls, such as “black girls,” and encountering pornographic or racist content. These initial searches led her to years of study in information science, using the first page of Google search results as data. Coming from an advertising background before obtaining her Ph.D. in Library and Information Sciences, Dr. Noble was uniquely situated in the early 2010s to recognize Google for the advertising company it really is, while working in a field where many scholars around her viewed it as a source with exciting potential. Noble’s book examines what is present and absent in that first page of search results, and what those results say about the underlying mechanisms of organizing information and the corporate decisions that enable those searches to occur. To open her public talk, Dr. Noble discussed several events that have occurred since her book was published in 2018. They notably included the exposure of Facebook’s privacy violations in 2017–2018 and the use of facial recognition technology by law enforcement and in public housing, despite research from Dr. Joy Buolamwini indicating that facial recognition and analysis algorithms are inaccurate and can be discriminatory when applied to people of color. Over… [Read More]

Racism and Representation in Information Retrieval

By: SE (Shack) Hackney
On: January 23, 2020
In: Safiya Noble
Tagged: Algorithms, archives, black history month, Data, digital humanities, diversity, Information Ecosystems, Libraries, racism

Happy Black History Month! (originally published February 2020) by S.E. Hackney

On Thursday, January 23rd, Dr. Safiya Noble spoke to an overflowing room of students, faculty, and community members about her best-selling book Algorithms of Oppression. The thesis of the book, and of Dr. Noble’s talk, is not only that racism is built into the search algorithms we use to navigate the internet, but that the big players of the internet (Google specifically) profit from that racism by tokenizing the identities of people of color. They do this by associating identity phrases such as “black girls” or “phillipina” with the sites with the most streamlined (that is, profitable) SEO, which is often pornography. This is a system of classification explicitly based on the centering of the white experience and the othering of Black people and other people of color. However, as Dr. Noble spoke about in her talk, tweaking a search result or two to avoid offense doesn’t actually solve a systemic problem, one where white voices are treated as the norm and others are eventually reduced to SEO tags to be bought and sold. This idea played out recently in Barnes & Noble’s misguided Black History Month project, in which public domain books whose protagonists’ race is not specified (as determined by algorithm) were given new cover art depicting the characters as people of color. Rod Faulkner, who first brought this issue to widespread attention, describes it as “literary blackface,” and points out, “Slapping illustrations of Black versions… [Read More]

The History of Science & Big Data’s Place in the Humanities

By: Jane Rohrer
On: November 15, 2019
In: Sabina Leonelli
Tagged: Big Data, Data, Open Data, Philosophy of Science

The Sawyer Seminar’s November 15 guest was Dr. Sabina Leonelli. Dr. Leonelli teaches Philosophy and History of Science at the University of Exeter, where she is also the co-director of the Egenis Centre for the Study of Life Sciences. Her book Data-Centric Biology: A Philosophical Study was published by the University of Chicago Press in 2016. She is now working on translating her 2018 book, Scientific Research in the Era of Big Data, into English from its original Italian. Both deal abundantly with recent shifts and innovations in how researchers process and understand scientific data. In both her public talk on Thursday, November 14, and the Sawyer Seminar lunch discussion, Dr. Leonelli walked us through the fundamentals of and distinctions between Big Data, Open Data, and FAIR Data (Findable, Accessible, Interoperable, Reusable); these distinctions, and mindful discussions about them, are increasingly necessary as, to quote Leonelli in Data-Centric Biology, “the rise of data centrism has brought new salience to the epistemological challenges involved in processes of data gathering, classification, and interpretation and…the social structures in which such processes are embedded” (2). As Leonelli described it, Big Data are defined by their capacity to move, be (re)used across situations and disciplines, and be (re)aggregated into different useful and usable platforms. To elaborate: while there is “no rigorous definition of Big Data,” we use them, in general, to complete large-scale projects that may not valuably be done at a smaller scale, often to extract new insights about an entire world, community, or issue. Humanist examples of this in practice would… [Read More]


Invited Speakers

  • Annette Vee
  • Bill Rankin
  • Chris Gilliard
  • Christopher Phillips
  • Colin Allen
  • Edouard Machery
  • Jo Guldi
  • Lara Putnam
  • Lyneise Williams
  • Mario Khreiche
  • Matthew Edney
  • Matthew Jones
  • Matthew Lincoln
  • Melissa Finucane
  • Richard Marciano
  • Sabina Leonelli
  • Safiya Noble
  • Sandra González-Bailón
  • Ted Underwood


The Information Ecosystems Team 2023

This site is part of Humanities Commons.