Jennifer Mankin and Martina Sladekova

27 January 2026


Welcome to Analysing Data!

Overview

  • Introductions
  • Module Structure and Sessions
    • Materials and Resources
    • Attendance Policy and Assessments
  • Data Skills and AI

Module Information

What It Says On The Tin

  • First year core statistics and research methods module

  • Comprises lectures, skills labs, and practical sessions

  • What will you learn?

    • Data literacy: how to think like a scientist about data
    • Working with data: cleaning, wrangling, summarising
    • R skills: using RStudio/Quarto, writing and reading code
    • Statistics: NHST, common tests, linear model

Contact

Module email: analysingdata.psychology@sussex.ac.uk

Module Discord rolled over from PaaS!

Dr Jennifer Mankin

Confidential queries: J.Mankin@sussex.ac.uk

  • Convenor and primary point of contact
  • Lectures, Skills Labs, Practicals
  • All module admin, assessments, queries

Dr Martina Sladekova

  • Co-convenor
  • Website Lead
  • Lectures, Skills Labs
  • Module admin

Dr Dom Makowski, Hanna Eldarwish, Fiona Lancelotte

  • Practicals

Materials and Resources

Canvas

Repository of all administrative info about the module

  • Schedule and syllabus
  • Assessment info and resources
  • Assessment submission points
  • Quiz and exam testing
  • Policies, rules, and guidelines

Also hosts all session recordings under Panopto Recordings

Important

If you have a question about the module or assessments, check Canvas first!

Website

  • Repository of (almost) all module content
    • ✨Lecture slides and tutorials
    • ✨Skills Lab example solutions
  • Contains everything you might read/refer to
    • Exception: Lecture/Skills Lab recordings on Canvas as usual

Posit Cloud

  • Weekly projects containing:
    • ✨Tutorial notebooks
    • ✨Skills Lab notebooks
    • ✨Practical worksheets
  • Contains everything you might do/complete
  • NO solutions released! Come talk to us :)

Important

Complete each week's project on the Cloud!

Module Structure

Types of Sessions

Lecture

  • One-hour lecture session Tuesday mornings
  • Concepts, ideas, statistical tests and principles

Skills Lab

  • One-hour interactive, live-coding session Wednesday mornings
  • How to think about and work with data

Practical

  • Two-hour supported working time
    • Ask questions and get help
    • Complete the tutorial/worksheet
    • Take a quiz
  • Multiple sessions throughout the week

Important

Skills Labs are not optional! They are distinct from the lectures and will contribute to your assessments.

Attendance

Lectures/Skills Labs

  • Attendance is required and recorded via PIN
  • Delivered in person, recordings posted on Canvas

If you miss a lecture or skills lab…

  • Watch the recording on Canvas
  • Take notes and follow along with the materials
  • For Skills Labs, try out the code yourself
  • Ask questions on Discord, in your practicals, or come to a drop-in for extra one-on-one attention!

Attendance is Key

  • Strong recommendation to attend live sessions consistently
    • Use recordings to supplement or review, not replace, lecture attendance
  • Highest marks for students who attended live lecture and reviewed recordings (Bos et al., 2016)
  • Attendance and recording usage both predict achievement (Nordmann et al., 2019)
  • Guidelines for students (Nordmann et al., 2020)

Practicals

  • Attendance is required and recorded manually
  • Interact in some way with tutors (any way is fine)
    • Ask questions, ask for help, get your work checked before the quiz!

If you attend any practical, you can access help and the quiz as normal.

If you miss a practical…

  • You can attend another practical in the week to take the quiz, but you will not be marked present
  • If you can’t attend another practical, you must notify us of your absence in order to be able to use the mop-up quiz

More on this in just a moment…

Changing Your Timetable

Assessments

All Assessments

See detailed information on Canvas

What                     Weight   When
Worksheet quizzes        25%      Every week in practical sessions
Take-away paper (TAP)    25%      48-hour period, due Week 7 Wednesday
Research participation   10%      Throughout term, due Week 11 Friday
Exam                     40%      A2 assessment period

Worksheet Quizzes

  • Before or during each practical, complete a worksheet
  • In the second hour, complete a marked quiz
  • The quiz covers the lecture, skills lab, tutorial, and worksheet
  • This week: practice quiz on Canvas, any time
  • Next week: practice quiz in practicals only!
  • Week 3: first marked quiz

Worksheet Quizzes - Mop-Up

Each week on Friday evening we will send out a “mop-up” quiz code via Canvas Announcement.

To use this code to complete the quiz, at least one of the following must be true:

  • You were absent from your practical session that week and have been marked absent
  • You were unable to complete your quiz in your practical due to circumstances outside your control (e.g. your computer broke, WiFi failure), and have contacted the convenor to agree to use the mop-up quiz.
  • You are registered with Disability Advice and have reasonable adjustments in place, and have contacted the convenor to agree to use the mop-up quiz for accessibility reasons.

You will be able to attempt the quiz until midnight on the following Monday, at which point the quiz will close.

Important

If you complete the mop-up quiz, but have neither a notified absence nor an agreement with the module convenor, the mark will be replaced with a 0.

DO THE PIN

AI

AI Use Policy



There is NO acceptable use of AI on this module.

Any suspected use of AI on any assessment will be treated as academic misconduct.




But what is the problem with getting a little help? Well…

The Cost of AI: Climate Disaster

The Cost of AI: Destruction and Theft of Creative Industries

The Cost of AI: Privacy, Reality, and Human Rights

What is “AI”, Anyway?

  • A marketing term that covers a huge number of technologies, methodologies, and models, including:
    • Spellcheck software
    • Text autocomplete
    • Automatic caption generation
    • Machine learning
    • “The Algorithm”
    • Statistical models of various kinds
    • Image/video generation and detection
    • Generative AI/Large Language Models (LLMs)

What is “Generative AI”?

These models have the following properties (quoted from Guest & van Rooij, 2025):

  • Are sophisticated statistical models so large they impact humans and the environment through their energy, land, and water use
  • Depend on vast swathes of data, which is mostly stolen or otherwise unethically obtained or refined
  • Can represent various statistical distributions and so can be discriminative, generative, or neither
  • Exist in a displacement relationship to humans, i.e. this type of AI product is harmful to people, it contributes to deskilling, and it obfuscates cognitive labour

This generally includes “large language models” like ChatGPT, Claude, xAI’s Grok, etc.

What is “Generative AI”?

In essence, Large Language Models generate linguistic output by probabilistically predicting the most likely next token in a sequence, based on patterns in massive amounts of training text

  • “Token”: a character, word, part of a word, etc.
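To make the prediction idea concrete, here is a toy sketch of next-token prediction (our illustration, not any real model): it counts which word follows which in a tiny corpus and always emits the most frequent continuation. Real LLMs are vastly larger, but the core move is the same: pick the most *probable* next token, not the *true* one. (Shown in Python for brevity; the corpus and the `most_likely_next` function are made up for illustration.)

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "massive amounts of training input"
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a simple bigram model)
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the most frequent next token after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# "the" is followed by "cat" twice, "mat" and "fish" once each,
# so the model confidently predicts "cat", regardless of truth.
print(most_likely_next("the"))
```

Note that a model like this cannot say “I don’t know”: asked for a continuation, it always produces the most plausible-looking one.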

They regularly produce “hallucinations”, i.e. bullshit (a technical term; Frankfurt, 2005)

  • This is an inherent property of LLMs (Lee, 2023)
  • They are constructed to be confident, conversational, and helpful
  • When there is a gap in the training data, they construct a plausible answer given the information they do have

What is “Generative AI”?


When you interact with an LLM/GenAI chatbot, you think you’re asking:


“What is the answer to this question?”


But what you actually get is:


“What does a plausible answer to this question sound like?”

So…




What does this mean for us as people who care about scientific accuracy, truth, fairness, and standards of evidence?

Using Generative AI “responsibly”

  • My position: Given what we know about the technology, there is no way to use it responsibly.

  • Mainstream (?) position: “Generative AI is okay to use as long as we acknowledge the harms and use it responsibly.”

  • Greenwashing / critical washing (Guest et al., 2025)

  • What might “responsible” use look like?

Using Generative AI “responsibly”

(1) Fact. Check. Everything.

  • Find authoritative information to confirm:

    • Every statement

    • Every reference

    • Every line of code

    • Every conclusion

    • Every recommendation

[… why not use an authoritative source in the first place and forgo the word lottery entirely?]

(2) Gain competence

  • Competence is NOT being able to judge whether something “sounds right”.

  • Competence in one area doesn’t automatically make you competent in another area.

Judgments with/without competence

Judgments with/without competence

Judgments with/without competence

Judgments with/without competence



Randomly generated output is not evidence.

Gen AI output does not represent consensus; it represents what is present in the training data.

Training data is known to be low quality and biased, and to contain misconceptions and nonsense (Kreutzer et al., 2022)

“I checked, and even ChatGPT said…” - No!

Judgments with/without competence

  • Any bias in the training data will be amplified in the output.

  • This is impossible to check or safeguard against

Judgments with/without competence

  • The information we get could be from a credible source… or from a random Reddit user.

  • It takes only a small, near-constant number of poisoned samples to compromise a model’s training data (Souly et al., 2025).

Judgments with/without competence

Using Generative AI “responsibly”

(1) Fact. Check. Everything.

  • Find authoritative information to confirm:

    • Every statement

    • Every reference

    • Every line of code

    • Every conclusion

    • Every recommendation

[… why not use an authoritative source in the first place and forgo the word lottery entirely?]

(2) Gain competence

  • Competence is NOT being able to judge whether something “sounds right”.

  • Competence in one area doesn’t automatically make you competent in another area.

[… once you gain competence, generative AI loses any real utility]

Gaining competence

Increasing push to:

  • Generate teaching materials with AI to be “more productive” (e.g. Department for Education 2025)

  • Teach students how to use AI to make them “employable”.

    • Follow the echo: the people shouting the loudest are tech CEOs who want students to use their products (Instructure, 2025)

Gaining competence

Learning happens through:

  • Effortful engagement with material

  • Productive struggling

  • Overcoming challenges

  • Repetition and practice: you need to do the thing to become good at the thing.

Perceived learning and actual learning don’t always correlate (Deslauriers et al., 2019)


Learning stats is hard. Learning coding is hard.

Learning is hard.

Preventing competence

  • Generative AI is antithetical to learning: it removes friction and active engagement.

  • Routine AI use:

    • creates cognitive debt by promoting cognitive offloading (Kosmyna et al., 2025)
    • negatively predicts critical thinking (Kulal, 2025)

LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.

Kosmyna et al. (2025)

Preventing competence

No productivity gains for coders (Becker et al. 2025):

Preventing competence

  • Deskilling (Guest et al. 2025) and degradation of actual employability skills with Gen AI use, like:

    • Problem solving

    • Critical thinking

    • Constructing arguments

    • Reading

    • Communication

    • Creativity

“Prompt engineering” is not a skill.

Which CV will get you hired?

Education:

  • BSc Psychology | University of Sussex | 2028

Skills:

  • Prompt engineering 🔄 regenerate response

Education:

  • BSc Psychology | University of Sussex | 2028

Skills:

  • Statistical analysis (incl. the General Linear Model and its extensions like moderation, mediation, and factorial designs; addressing bias in models; robust estimation and hypothesis testing)

  • R Programming

  • Data visualisation

  • Critical evaluation

  • Scientific communication

  • Reproducible reports in Quarto

Generative AI…

  • causes great environmental and societal harm

  • requires meticulous fact-checking or a high level of competence to use “responsibly” (defeating the point of using it in the first place)

  • is pushed as “The Skill” necessary to enter the workforce, yet offers no real practical benefit

  • undermines learning, skills, and productivity…


… and therefore cannot be used responsibly

AI Use Policy



There is NO acceptable use of AI on this module.

Any suspected use of AI on any assessment will be treated as academic misconduct.

But this is haRd!

Hell yeah it is! That’s what we’re here for.

  • We LOVE helping students understand hard things!
    • It’s our actual job and also something we are passionate and enthusiastic about
  • Come to practicals even (especially!) if you’re struggling
    • We will sit with you and help you step by step
  • Come to drop-ins if you want extra help
    • We will answer questions, help you plan study strategies, listen to you

That’s All (For Now…)

See you tomorrow for the first Skills Lab!

References

Becker, Joel, Nate Rush, Elizabeth Barnes, and David Rein. 2025. “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity.” https://arxiv.org/abs/2507.09089.
Bergstrom, Carl T., and Jevin D. West. 2020. Calling Bullshit: The Art of Skepticism in a Data-Driven World. New York: Random House.
Department for Education. 2025. “Generative Artificial Intelligence (AI) in Education.” Department for Education. https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education.
Deslauriers, Louis, Logan S. McCarty, Kelly Miller, Kristina Callaghan, and Greg Kestin. 2019. “Measuring Actual Learning Versus Feeling of Learning in Response to Being Actively Engaged in the Classroom.” Proceedings of the National Academy of Sciences 116 (39): 19251–57. https://doi.org/10.1073/pnas.1821936116.
Frankfurt, Harry G. 2005. On Bullshit. Princeton, NJ: Princeton University Press. https://doi.org/10.2307/j.ctt7t4wr.2.
Guest, Olivia, Marcela Suarez, Barbara Müller, Edwin van Meerkerk, Arnoud Oude Groote Beverborg, Ronald de Haan, Andrea Reyes Elizondo, et al. 2025. “Against the Uncritical Adoption of ’AI’ Technologies in Academia,” September. https://doi.org/10.5281/ZENODO.17065098.
Instructure. 2025. “Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences Within Canvas.” Instructure. https://www.instructure.com/en-gb/press-release/instructure-and-openai-announce-global-partnership-embed-ai-learning-experiences.
Kosmyna, Nataliya, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein, and Pattie Maes. 2025. “Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task.” https://arxiv.org/abs/2506.08872.
Kreutzer, Julia, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, et al. 2022. “Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets.” Transactions of the Association for Computational Linguistics 10: 50–72. https://doi.org/10.1162/tacl_a_00447.
Kulal, Abhinandan. 2025. “Cognitive Risks of AI Usage: How Literacy-Driven Trust Modulation Protects Critical Thinking-A Mixed Methods Approach.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.5355424.
Souly, Alexandra, Javier Rando, Ed Chapman, Xander Davies, Burak Hasircioglu, Ezzeldin Shereen, Carlos Mougan, et al. 2025. “Poisoning Attacks on LLMs Require a Near-Constant Number of Poison Samples.” https://arxiv.org/abs/2510.07192.