Jennifer Mankin and Martina Sladekova
27 January 2026
First-year core statistics and research methods module
Comprises lectures, skills labs, and practical sessions
What will you learn?
Module email: analysingdata.psychology@sussex.ac.uk
Module Discord rolled over from PaaS!
Confidential queries: J.Mankin@sussex.ac.uk
Repository of all administrative info about the module
Also hosts all session recordings under Panopto Recordings
Important
If you have a question about the module or assessments, check Canvas first!
Important
Complete each week’s project on the Cloud!
Important
Skills Labs are not optional! They are distinct from the lectures and will contribute to your assessments.
If you attend any practical, you can access help and the quiz as normal.
More on this in just a moment…
See detailed information on Canvas
| What | Weight | When |
|---|---|---|
| Worksheet quizzes | 25% | Every week in practical sessions |
| Take-away paper (TAP) | 25% | 48-hour period, due Week 7 Wednesday |
| Research participation | 10% | Throughout term, due Week 11 Friday |
| Exam | 40% | A2 assessment period |
Each week on Friday evening we will send out a “mop-up” quiz code via a Canvas Announcement.
To use this code to complete the quiz, at least one of the following must be true:
You will be able to attempt the quiz until midnight on the following Monday, at which point the quiz will close.
Important
If you complete the mop-up quiz but have neither a notified absence nor an agreement with the module convenor, your mark will be replaced with a 0.
There is NO acceptable use of AI on this module.
Any suspected use of AI on any assessment will be treated as academic misconduct.
But what is the problem with getting a little help? Well…
These models have the following properties (quoted from Guest & van Rooij, 2025):
So this generally includes “large language models” like ChatGPT, Claude, xAI’s Grok, etc.
In essence, Large Language Models generate linguistic output by probabilistically identifying the most likely next token in a sequence, based on massive amounts of training input (see the toy sketch below)
They regularly produce “hallucinations” - i.e. bullshit (technical term)
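To make that concrete, here is a minimal sketch in R of the underlying idea, assuming nothing but a made-up ten-word corpus. This is a toy bigram model, not a real LLM; every word and name in it is invented for illustration, but the principle of “sample the next token from the training-data distribution” is the same:

```r
# Toy "language model": count which word follows which in a tiny corpus,
# then sample the next word from those counts. Real LLMs do this with
# billions of parameters and tokens, but the principle is the same.
corpus <- c("the", "cat", "sat", "on", "the", "mat", "so", "the", "cat", "ran")

# Cross-tabulate each word against the word that follows it
transitions <- table(head(corpus, -1), tail(corpus, -1))

# Given the prompt "the", turn the counts into probabilities and sample.
# The output is whatever is *plausible* given the training data - there
# is no notion of truth anywhere in this process.
probs <- transitions["the", ] / sum(transitions["the", ])
sample(names(probs), size = 1, prob = probs)
```

Run it a few times: sometimes you get “cat”, sometimes “mat”. Scaled up enormously, that is why an LLM’s answer sounds right without any guarantee that it is right.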
When you interact with an LLM/GenAI chatbot, you think you’re asking:
“What is the answer to this question?”
But what you actually get is:
“What does a plausible answer to this question sound like?”
What does this mean for us as people who care about scientific accuracy, truth, fairness, and standards of evidence?
My position: Given what we know about the technology, there is no way to use it responsibly.
Mainstream (?) position: “Generative AI is okay to use as long as we acknowledge the harms and use it responsibly.”
Green washing / Critical washing (Guest et al. 2025)
What might “responsible” use look like?
(1) Fact. Check. Everything.
Find authoritative information to confirm:
Every statement
Every reference
Every line of code
Every conclusion
Every recommendation
[… why not use an authoritative source in the first place and forgo the word lottery entirely?]
(2) Gain competence
Competence is NOT being able to judge whether something “sounds right”.
Competence in one area doesn’t automatically make you competent in another area.
[… once you gain competence, generative AI loses any real utility]
Gen AI output does not represent consensus; it represents what is present in the training data.
Training data is known to be low-quality and biased, and to contain misconceptions and nonsense (Kreutzer et al. 2022)
“I checked, and even ChatGPT said…” - No!
Any bias in the training data will be amplified in the output.
Impossible to check or safeguard
The information we get could be from a credible source… or from a random Reddit user.
It only takes a small number of instances to poison a training sample (Souly et al. 2025).
Increasing push to:
Generate teaching materials with AI to be “more productive” (e.g. Department for Education 2025)
Teach students how to use AI to make them “employable”.
Learning happens through:
Effortful engagement with material
Productive struggling
Overcoming challenges
Repetition and practice - you need to do the thing to become good at the thing.
Perceived learning and actual learning don’t always correlate (Deslauriers et al. 2019)
Learning stats is hard. Learning coding is hard.
Generative AI is antithetical to learning: it removes the friction and active engagement that learning requires.
Routine AI use (1) creates cognitive debt by promoting cognitive offloading (Kosmyna et al. 2025) and (2) negatively predicts critical thinking (Kulal 2025)
“LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.”
Kosmyna et al. (2025)
No productivity gains for coders (Becker et al. 2025).
Deskilling (Guest et al. 2025) and degradation of actual employability skills with Gen AI use, such as:
Problem solving
Critical thinking
Constructing arguments
Reading
Communication
Creativity
Education:
Skills:
Statistical analysis (incl. the General Linear Model and its extensions such as moderation, mediation, and factorial designs; addressing bias in models; robust estimation and hypothesis testing) - see the taster sketch after this list
R programming
Data visualisation
Critical evaluation
Scientific communication
Reproducible reports in Quarto
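As a small taster of those skills, here is a minimal sketch using base R and the built-in mtcars dataset. The dataset and variables are purely illustrative, not module data:

```r
# One instance of the General Linear Model: does a car's weight
# predict its fuel efficiency?
model <- lm(mpg ~ wt, data = mtcars)
summary(model)   # coefficient estimates, standard errors, t-tests, R-squared

# Data visualisation: scatterplot of the raw data plus the fitted line
plot(mpg ~ wt, data = mtcars,
     xlab = "Weight (1000 lbs)", ylab = "Miles per gallon")
abline(model)
```

By the end of the module you will be able to write, interpret, and report analyses like this yourself - no word lottery required.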
In summary, generative AI:
causes great environmental and societal harms
requires meticulous fact-checking or high levels of competence in order to use “responsibly” (defeating the point of using it in the first place)
is pushed as “The Skill” necessary to enter the workforce yet offers zero practical benefits
undermines learning, skills and productivity…
There is NO acceptable use of AI on this module.
Any suspected use of AI on any assessment will be treated as academic misconduct.
Hell yeah it is! That’s what we’re here for.
See you tomorrow for the first Skills Lab!