But is it safe?
Maren and Jim over at Reclaim are starting a blogging community, so it feels like the 2000s again. Get a Northern Voice conference going in Canada someone, before it becomes part of the USA! I don’t really need encouragement to blog, but if you do, Maren has some advice on getting your blogging mojo back. I’m trying not to do that thing of linking everything to “current events”, but I do feel that having your own platform, your own voice, community and identity when so much of that is controlled by people you wouldn’t trust with a glue pen, does add extra currency to blogging.
Brian Lamb (he releases blog posts with the frequency of The Stone Roses releasing albums) uses prompts from Tom Woodward (more of a Taylor Swift release cycle), so in the interest of saving cognitive effort, I’ll use them too.
Why did you start blogging in the first place?
Two of my OU colleagues, John Naughton and Tony Hirst, were avid bloggers at the start of the 2000s, and enthused about the process of this form of writing. I was all about the internet and elearning back in the day, so I thought I’d give it a try. After a couple of false dawns I settled on edtechie, initially over at Typepad and then on Reclaim.
What platform are you using to manage your blog and why do you use it?
WordPress, hosted on Reclaim. I’m not really a WP expert or advocate, but it does what I need, and I can easily find plug-ins and themes for my needs. There has been some dodgy dealing around WP recently, but at least it doesn’t tie me into a particular provider.
Have you blogged on other platforms before?
I tried Blogger for one of those abortive early attempts, and then used Typepad before Jim Groom lured me over to the Reclaim side with promises of free biscuits.
How do you write your posts?
Badly (ba-dum tss). I vary, but I often have an idea that might arise from a conversation, or something I’ve read that percolates around for a few days. Then I bash it out in one sitting – usually it only takes 20 mins or so, as I’ve done a lot of the thinking (if it’s required) by then. I also deliberately don’t want to make these into fully formed academic essays; they are thoughts I have along the way and I don’t want the pressure of making them perfectly formed to get in the way of writing.
When do you feel most inspired to write?
Going to conferences often used to provoke a post or two, when I’d be in a session and something big or small would trigger off a thought. Or sometimes it’s in reaction to something – usually stupid about ed tech. Maren and I have a lot of conversations, usually when walking the dogs, and these can lead to writing posts.
Do you normally publish immediately after writing, or do you let it simmer a bit?
Publish, read it and see how many typos I’ve made, edit it, publish again and then forget about it.
What’s your favorite post on your blog?
I’m often surprised by things I’ve written about and then forgotten. Rather immodestly I sometimes find an old post and think “that’s really rather good. Well done me.” I don’t have a favourite post, but readers will know I like an overstretched metaphor, so something like 10 Lessons from Apocalyptic Literature or Edtech & Symbols of Permanence I think get at the playful possibilities of blogging while also, hopefully, saying something thoughtful. That’s why I love blogging, you can’t do that stuff in an academic journal.
Any future plans for the blog?
Not really, now that I’ve left the OU and I’m not involved in educational technology as much, I wonder if the identity of the blog should change. But I’ve always had a personal element to the blog, it’s not like it’s an ed tech newsletter, so I think I will persist here. But I guess people may have to get used to fewer posts about open education and more about baking pies and walking Teilo on the beach.
(Image – Life is sharing, CC-BY some geezer called cogdog, who he?)
I’ve been reading Brian Merchant’s Blood in the Machine recently. It’s an engaging account of the Luddite rebellion, which is well researched and told, but what really brings it to life are the direct comparisons he makes between Silicon Valley entrepreneurs and the mill owners, who use technology to accrue capital in the hands of a few, and take agency from working people. The fact that “luddite” is a derisory term instead of championing people who fought for their livelihood and humanity is a victory for those same entrepreneurs.
Anyway, as was the intention of the book, it got me thinking about AI. He makes the point that the Luddites were not anti-technology, they were anti “technology being used to enrich a few and strip everyone else”. They were pro some technologies: for instance, he gives the example of a tool that could automatically assess the quality of weave. The mill owners didn’t want this technology however, as they preferred to be the sole arbiters (and thus payers) of quality. This got me thinking, what would AI not driven by entrepreneurs look like?
Before I start on this, I need to be clear – I am not advocating for the use of AI tools such as I set out here. I think they would be gamed and probably disastrous. But they are no more fanciful than the applications we are seeing proposed. My point here is to demonstrate that when AI proponents state that AI is inevitable, the model they are proposing is one that is steeped in ideology. By looking at possible alternatives, this becomes apparent.
Let’s start relatively small. Social media is a toxic dumpster fire emitting fumes globally, right? Partly this is down to AI bots, so you could equally train AI bots to find and delete posts that promote disinformation, hate speech, etc. Bots whose aim is to improve the overall quality of the online communication space. I wonder why Poundshop Hitler doesn’t want to implement that on X?
Let’s go bigger. How about an AI system that monitors the housing market and allocates resources to build houses most in demand, and sets rental prices to the maximum benefit of society as a whole? Or full on AI socialism, that dynamically taxes (entrepreneurs love dynamic pricing after all) and reallocates wealth according to the utilitarian benefit of the nation as a whole? Richard Eskow makes the point that AI is trained on our data, so we should own it.
Just to reiterate, these would probably all be a nightmare, but no more than AI infiltrating your workplace. It’s noticeable that improved efficiency is the number one benefit of AI that people promote (all those effing summaries). Why not improved equity, social justice, happiness even? So, the next time a tech bro is advocating about the coming AI singularity, respond by saying you look forward to the AI Socialist Wealth Redistribution System. They will miraculously find reasons why that couldn’t possibly work…
Maren celebrated the 101st episode of her podcast recently, and I was the invited guest. We riffed off the idea of Room 101. If you don’t know this, it borrows from Orwell’s Room 101, which contains your biggest fear; this was converted into a light entertainment radio and TV programme where people nominate pet peeves to go into Room 101 so we don’t have to experience them anymore. Going into the new year we volunteered what we would like to put into Room 101 for 2025. Here were my options:
Anything “bro” – I was watching the US coverage of the election back in November, and I knew we were doomed when the commentators seriously debated the “bro-vote”. Tech-bros, gym bros, AI bros, podcast bros – it never ends well and usually denotes a bullish, unreflective and most damagingly unhumorous approach. The only acceptable bro-ing is when I take my dog Teilo for a long walk, and we have some bro-time.
Unwanted AI in every product – British cuisine is often derided as “chips with everything” and the current interpretation of innovation seems to be AI with everything. AI is surely useful, but this blanket application – because, hey, if we say it’s got AI, we’ll look modern – is usually just pointless, often annoying and at worst sinister, mining data and trying to profile me. I think many people are finding the “AI with everything” approach a turn-off, despite what marketing gurus think.
The “death of the university” articles – MOOCs, blockchain, AI – these were all going to kill the university or at least mark a distinct revolution. I posit the idea that journalists who write puff pieces about the death of the university should be legally obligated to return to that story 3 years later and see how it has turned out.
Personally doomscrolling – this is one for me, to avoid some of the nasty noisiness in the world coming our way in 2025. Bro-time with Teilo is the answer.
So those were my choices, listen to the show to find out what Maren put in Room 101. What would be your contenders?
We went to the coast for a week over Christmas, and had an unexpectedly sunny day on Boxing Day, the drinks in the picture above were outside a pub in Tresaith.
The end of an eventful year, during which I left the Open University, became semi-retired, got engaged and had to do a lot of emergency care for elderly parents. It seems odd now to think I was still working at the OU 12 months ago; the human ability to adapt to a new context and take it as the new norm is always a surprise. And speaking of new norms, 2025 looks set to be a shitfest, right? So start erecting those cognitive defences now.
I signed off on my N-Tutorr report this month, my report acts as an overview, and I enjoyed flexing the writing muscle again. They should all come out in the new year. One of the things I’ve tried to emphasise in my report, which looks at the impact of five technological trends on higher ed, is the old Hitchhiker’s Guide to the Galaxy advice of Don’t Panic. I think a lot of the ed tech industry relies on generating a sense of panic – that feeling that if you don’t engage with [Insert New Tech] now, and engage with [Insert New Tech] completely, then it will be too late, and all will be lost. This is a useful notion to foster for purveyors of [Insert New Tech], because quite often once the dust has settled, the actual benefits are less all-pervasive than initially trumpeted and uptake is lower than predicted. But that doesn’t matter because now we’re onto [Insert New Tech 2]. This is different from saying that these technologically driven changes to education are not useful, but that the scale and immediacy is not at the levels pitched in the media. I’m sure you can think of one current tech that fits that bill…
Books
I finished the year having read 171 books. I read a lot, across all formats, and across a lot of genres. Admittedly I read a lot of ‘low-brow’ fiction, mainly horror, for entertainment, but the notion that reading has to always be worthy is much to its detriment. Audrey has some interesting reflections on reading this month; we follow each other on Goodreads, so I often pick up on recommendations based on her reading (I don’t think the reverse happens much, sorry Audrey!). Like a 6-year-old in a playground, I’m asking if you want to be my friend too.
Anyway, let’s see what 2025 brings. I keep expecting to have a quiet year, and then “stuff” happens.
(Spot on image, update from his earlier version by Chaz Hutton https://www.instagram.com/p/DDbmv-yNijL/)
When explaining the concept of enshittification, Cory Doctorow sets it out as the manner in which platforms die: “First, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.”
It speaks to a general degradation of experience, of the thing that once drew you to a service or platform becoming increasingly lost, amidst adverts, trolls, bad user experience, and principally through a lack of care for the users. This can be done because users are locked in, as John Naughton notes, through a lack of interoperability and the network effect. So we have to just put up with more crap, as the platforms get rinsed for everything they can.
I was thinking about this recently when I saw Zoom’s new announcement (I thought it was parody at first, but alas, no, these tech dudes mean it). In it they state that “Zoom is now about so much more than video meetings. We are an AI-first company…” Then they go on to claim that their AI Companion will “Over time… translate into a fully customizable digital twin equipped with your institutional knowledge, freeing up a whole day’s worth of work and allowing you to work just four days per week.”
Let’s just take a moment to consider how batshit crazy that is. Will all our digital twins have meetings? Will I come back to work to find my digital twin has volunteered real me to take on more tasks? I’ll bet our digital twins won’t ask any awkward questions of company policy, though. And yeah, that age old promise of “it’ll free up more time”, and not, you know, be used as an excuse to lay people off.
I’ve seen calls for an AI university, for AI to make medical diagnoses, and as I highlighted in the last post, AI as the gatekeeper to knowledge sources. And we already know how that goes – just try speaking to an actual person at a bank, a service provider, or increasingly a doctor’s surgery.
A while ago, I suggested that if you wanted to know what the wealthy proponents of AI really valued, then watch their own behaviour. Now ask yourself, would Elon Musk be happy to have a meeting with your digital twin instead of the real you? Would they send their kids to AI schools, or be subject to AI medical diagnosis? Somehow I doubt it. But they have money, so can get the best doctors, teachers, investment bankers and so on. The AI version is for the rest of us.
This is enshittification – not of a platform, but of your life. Many aspects of your everyday life will be downgraded to the AI-as-default version. The things you currently do and take for granted will be sold back to you as the premium version. For a monthly subscription you can have access to real people, otherwise it’s AI for you. Paying for access to better service has always been the case of course, but AI allows for it on steroids due to a combination of two factors: performance and cost. It’s just good enough to meet most needs without damaging the customer base and it’s a lot cheaper to run than people. Also, as with platform enshittification, now that everyone is doing AI, where are you going to go? There’s no escape.
The demands of capitalism turn AI into an enshittification engine, and we’re locked in. This is almost inevitable with the combination of those two factors of performance and cost. So, the next time you see people propounding how AI will improve your life and efficiency, substitute “the enshittification of my life” for all those promises and you’ll be closer to the truth. But let’s not feel hopeless, as Doctorow says, it’s a choice. You can choose to use different platforms and providers (sometimes) and we can all choose to value and promote good human actions. I would suggest that openness also offers some antidote to this. As does not promoting every piece of tech utopian bullshit. This is not about AI being useful, it clearly is in many places, rather it’s about what happens when it gets enmeshed in society and capitalism without due care.
Anyway, merry Christmas everyone.
In one of our dog walking chats, Maren and I were talking about AI (I know, I know), and she was saying she didn’t really see the benefit of it in most circumstances. I was trying to be the AI pragmatist and responded that “it’s good for summarising documents”. To which Maren replied “but I don’t want that, I want to read the thing”. And Audrey has made a similar point in her newsletter pointing out that the summarised version of knowledge is “more efficient, to be sure. It’s also much, much safer.” The Apple Intelligence adverts you may have seen also make the ability to create quick summaries the main selling point.
David Wiley gives an example of how he’s “created a research assistant agent that reads through preprints each morning and identifies the latest research on the impact of generative AI on teaching and learning”. I can see how that would be useful, and I know from having a daughter in higher education, it’s useful to get a summary of research in a field that you don’t need to go into depth for. I still think summarising is useful. But Audrey and Maren have made me ask the question: is it that great? First, if we just want the summary version of everything, then why are we writing all this other stuff to start with? Second, just how reliable is that summary you are getting, and will it miss the interesting nuance? Third, doing that reading, watching, listening is often actually the thing you want to be doing; the process is the point. Anyone who watches sport knows there’s a lot of difference between watching a full game and getting the highlights.
But beyond the whole “summary fetishisation” there is a more worrying trend. That is when you can only get access to the AI summary. Tom Scocca reports on how The Washington Post has (or is experimenting with) removing its archives, and instead giving access to AI trained on its archive. So much more efficient, right? Needless to say, anyone wanting to do actual research will want access to full archives, not just some bland summary of them. As Scocca puts it “No one who cared about the purpose of the Washington Post or the purpose of the Washington Post archive would have ever allowed the Ask The Post AI to be deployed. But the world has allowed the management of knowledge to be taken over by ignoramuses, and now the ignoramuses have built ignoramus machines in their own image, manufacturing non-knowledge on a scale previously unimaginable.”
You can see lots of others going the same way, we only need summaries, let the AI do the hard work. This would be a disaster for our information ecosystem. Summaries summarised into new summaries. No actual knowledge, just summaries all the way down. It’s an example of two things – first the unquestioning belief that summaries are always a good thing, and second handing over systems to AI without due care to consequences.
For fun, here is the AI summary (from Pop AI) of this post. I will say, it’s pretty good. But it lacks that Weller charm, right?
________
Overview: The document discusses the implications of using AI for summarizing content, highlighting both its benefits and potential drawbacks.
Key Points: The document raises important questions about the role of AI in summarization, advocating for a balanced approach that values original content and critical engagement with information rather than solely relying on AI-generated summaries.
(Image – Summary judgement by Nick Youngson CC BY-SA 3.0 Pix4free)
It’s been a very good year for vinyl, with lots of top-of-their-game releases from favourite artists and a few new ones I’ve discovered. I’m stealing Pitchfork’s use of RIYL (Recommended If You Like) this year, so here are ten of the new releases I’ve enjoyed the most this year:
Bill Ryder-Jones – Iechyd Da. RIYL: sitting on the sofa wearing a hoodie and eating cheese puffs while watching epic movies in the afternoon; finding patterns of beauty in smoke eddies.
Brittany Howard – What Now. RIYL: Microdosing at a barbecue; nu-retro vintage clothing.
Hurray for the Riff Raff – The Past is Still Alive. RIYL: Reading Cannery Row on an abandoned train; hanging around on industrial estates after dark.
Aaron Frazer – Into The Blue. RIYL: Wearing a hat at a jaunty angle; ironic dad dancing.
Waxahatchee – Tiger’s Blood. RIYL: Chugging beer in a pickup, crushing the can and tossing it out of the window while bellowing to the radio.
Cassandra Jenkins – My Light, My Destroyer. RIYL: Thinking big thoughts about the stars and distance; spending one hour looking at a single painting in the museum.
MJ Lenderman – Manning Fireworks. RIYL: Raymond Carver stories; Sean Baker films; Raymond Carver stories directed by Sean Baker.
Laura Marling – Patterns in Repeat. RIYL: Analysing the hidden meaning in kids TV; sacred moments of silence and peace while on the loo.
Beth Gibbons – Lives Outgrown. RIYL: Contemplating life’s meaning while gardening; making a big pot of tea to accompany your screaming into the void session.
Nilüfer Yanya – My Method Actor. RIYL: Feeling nostalgic about thinking 30 was getting old; Secretly playing indie rock in your Airpods at a nightclub.
Bonus album – this one actually came out in 2023, but I didn’t get it until this year and it would be a shame for it to miss out, because it’s a firecracker of an album, and Raye made the BBC’s list of 100 most influential women.
Raye – My 21st Century Blues. RIYL: Winding up tech bros on social media; Female revenge movies.
Remember, if you like seeing pictures of album covers out in the wild in Wales (I mean, who doesn’t?) there is my rather niche Instagram account. Here is my Spotify playlist of these and other vinyl purchases this year:
(My post-apocalyptic survival skill is making pies)
I’ve been trying, unsuccessfully, to avoid the whole US election fallout this month. So let’s get that out of the way. Amongst the many depressing things that have been noticeable since that night in November is the complete failure of traditional political commentating. They are still applying the idea of the rational voter, so end up effectively asking questions such as “what policy of the deranged, self-declared tyrant really appealed to you?” or “where did the Democrats’ campaign go wrong in failing to appeal to the supporters of a man who thinks Hannibal Lecter is real?” I’m no US political analyst but this doesn’t seem to be a failure on the part of the Democrats, or Harris. It is a failure of the US electorate (or at least a good chunk of them), and they will have to own it. It’s a shame it has to take everyone else down as well.
Anyway, in other news, I completed a report for the N-TUTORR project on digital transformation in higher ed. There is a suite of these reports coming out in the new year, from a range of excellent authors, so look out for them. I’ll blog more about the content of mine when it is published. Rather like Trump, I feel like I’ve been trying to avoid reading or writing about AI this month, and again failing. Like Trump, AI is just so noisy. One positive of the election has been the mass exodus to BlueSky which feels like the old days of Twitter, at least for a while. I am @edtechie.bsky.social over there. I’m trying to get some momentum back into social media posting (no, I don’t know why either), so at the moment I’m trying the scattergun approach across interests.
Books
One of those interests is reading. This month I reached my 2024 reading goal of 150 books with over a month to spare. I know having reading targets is a contentious idea, but I didn’t feel pressure to hit that target, it is now just more part of my daily practice. This month I read a couple of books with a theme which might be labelled “strangely hopeful”. The first was a 1995 Belgian, dystopian, feminist sci-fi book called I Who Have Never Known Men, by Jacqueline Harpman. It’s an extraordinary book, with a group of women finding themselves captive on an unknown planet for an unknown reason. Their guards disappear suddenly, for reasons also unknown. And they never get answers (except to find similar bunkers holding men or women, none of whom escaped). But it ends with musings on hope and purpose. Similarly thought-provoking was Cal Flynn’s Islands of Abandonment. She visits sites that have been abandoned by humans, for example Chernobyl, and repeatedly finds that nature has flourished in these zones. In ecology’s version of man or bear, it seems that given the choice between radiation (or other seemingly unfavourable conditions) and the presence of humans, it is always better not to choose the human option.
Definitely not hopeful reading is Annie Jacobsen’s impeccably researched Nuclear War: A Scenario. She takes us through all the decision making procedures, actions and consequences of an imagined nuclear strike on the US. You will be unsurprised to hear that it doesn’t end well for anyone. I grew up in the nuclear angst of the 80s (we used to doodle atomic mushroom clouds on our school books, like it was normal), and I agree with the author’s contention that we’ve become complacent about the nuclear threat after the Cold War. But the threat is still very real, and what the book brings home is the dizzying escalation. Within hours we’ve gone from normal life to global armageddon without any of the usual careful escalation tactics. You could have a nap and wake up to find the world has ended. The book also reinforces the absolute power of the US president. In this interview she also makes the point that you want the Commander in Chief who “is of sound mind, who is fully in control of his mental capacity, who is not volatile, who is not subject to anger”. So, nothing to worry about there then.
Vinyl
A new Laura Marling album is always a treat and her new album, Patterns in Repeat, sees her reflecting on parenthood. Also, I picked up a reissue of an old favourite of mine, the dusty Americana classic Chore of Enchantment by Giant Sand. There is a “lost years” period for lots of vinyl releases during the late 90s, when CD was so popular, and everyone so assured of the decline of vinyl that new releases either didn’t come out on vinyl at all or had limited release. This was one of them, and I had previously owned it on CD, and any vinyl copies went for a lot of money. So, hurrah for reissues. If we all have to go and live in the desert after nuclear war, this will be a fine soundtrack.
As the great Xodus to BlueSky gathered pace over the past fortnight it was fun (ie, not fun at all) to see the entirely predictable “it’ll just be an echo chamber in BlueSky” pieces. Because they are attempting to legitimately monitor content, lots of trolls feel hard done by. “Come back” they say “the racists and misogynists just want to chat”. Before all the Mastodon gang pile in, I want to stress that this isn’t necessarily a pro-BlueSky piece, more an anti-X one. I’ve seen enough enshittification to know that BlueSky will probably go that way one day too. But for now, let us enjoy the frothing from the Musk fanboys.
The first argument they like to put forward is that, hey, they like to hear the views of different people because they’re open-minded. What they usually mean is they like to shout at people who they disagree with because that’s how they get their kicks and how dare you take that fun away from them.
Others protest: how will we know what the far right are thinking if we don’t have a shared platform? LOL, you could go and live in a hut on Tristan da Cunha and Trump, Musk, Murdoch etc are so noisy that you’d still hear them. Or a variation on this is that we should all engage more. Yeah, because famously the likes of Trump, Farage, etc are all about the two-way engagement. As this nice piece of satire puts it: “But some snowflakes didn’t like constantly being bombarded with all of those valid right-wing concerns about the economy, and taxes, and what kind of genitals everyone should be allowed to have.”
John Naughton says BlueSky feels like a breath of fresh air, and I agree. I don’t use social media anywhere near as much as I used to, and when I do, you know what, I kind of want to find it enjoyable. And not be immersed in crap. But I’ll go further, the fact that so many of the people you don’t want to hear from think you shouldn’t be on BlueSky (or Mastodon, or Threads) is a compelling argument to join. It’s an act of mini-resistance. They want to, as I said in the last post, operate their “flood it with shit” policy and if you’re not there, then they can’t. They also don’t want you to be off enjoying yourselves somewhere else, they rely on people being ground down and miserable. So, yeah, head off to BlueSky or wherever and chat about the weather, cats, food, sports, reasonable politics without the reply guys popping up to tell you, well actually. We used to worry about the echo chamber a lot back in the early days of Twitter, and now look back on those days fondly. It’s not really an echo chamber, it’s just ignoring assholes.
(Image via https://commons.wikimedia.org/wiki/File:Disinformation_and_echo_chambers.jpg)
Continuing my annual series of selecting one educational technology that became significant that year.
I’ve covered AI in a few previous entries, but this year’s entry returns to it I’m afraid, namely the rise of the term and the content it describes – AI slop. The term AI slop initially referred to that ridiculous artwork of Jesus and prawns (to be fair, these are weirdly quite funny), but can be broadened to encompass all AI-generated content that is of low value. AI slop is a great term, although it’s not clear who came up with it. White supremacist Steve Bannon boasted of his policy of combating the media by “flood[ing] the zone with shit”: just generate outrage, confusion and distraction to the degree that the truth gets lost, or ceases to matter. Well, now AI can ramp up that shit-flooding to biblical proportions. In ed tech, Michael Barber’s fantasised Avalanche may actually be coming, but it will be an avalanche of slop, a tsunami of shit. Amazon is being flooded with AI-generated books, Reddit and Facebook are swamped with bots and AI posts, Google search is diluted by dodgy advice. Wikipedia tries to stand firm.
The metaphor of ecosystem is one that is over-used but it works here I think. Our information ecosystem may be much less robust than we think. It’s not that AI-generated content is bad, or incorrect, necessarily, but rather that it is just bland and often useless. As anyone who has seen an AI-generated Facebook post will know – either an image proclaiming to be real, or a post analysing something with obvious errors – it drains attention. It’s the quantity that becomes difficult to combat, and this is where the ecosystem analogy comes in. Rabbits are not particularly harmful individually, but when introduced into Australia, they bred like, erm, rabbits, and overran the local ecosystem. Attempts to control the flood of AI content and protect our information spaces are likely to be as effective as the famous rabbit proof fence. It’s not unrealistic to imagine the internet being similarly overrun by AI slop and us humans edged out.
In education the issues are numerous. Academics have barely accepted the use of Wikipedia by students; how are they going to cope with buckets of slop? Students will have more of this stuff to wade through, and they will use tools to generate content that just about meets assessment needs. University policies and boards spend their time combatting and policing the use of this stuff. Journals are inundated with AI-generated papers. Given that the great claim of AI is that it improves efficiency, all this extra work that it generates doesn’t seem to be taken into account. “Fighting AI slop” should become an entry in work-planning so we can record just how much time is spent doing this.
There is some evidence that AI improves performance at the lower end but lowers creativity overall. That can be expanded to encompass its impact on education as a whole. So, yeah, 2024 was when we really began to see the impact of AI on our information ecosystem, and became aware of the potential long-term damage.
There are times when being proved right is the worst thing you can imagine… In the run up to the US election I had lots of conversations with my daughter, who studied US politics. She thought Harris would win based on proper rational analysis. I thought Trump would win based on a nasty feeling.
My rationale was this – the US hasn’t gone deep enough into the crap yet for there to be a consensus that Trumpism is a bad thing. Now, don’t get me wrong, it really should have come to that conclusion, but when you see that even the Jan 6th insurrection is not sufficient to stop Trump running again, then you know that not enough people have had that realisation. In the UK we finally, finally got rid of the Tories this year. But it took fourteen years and a lot of crap for the mood to swing sufficiently. Brexit wasn’t enough, there were still people who convinced themselves it would be good if only, you know, we did it properly. It took a Johnson premiership, weekly scandals, partygate and Liz Truss for chrissakes, before the British people finally turned.
The US is typically a nation of extremes, and so I fear it will take more than this before they reach the bottom from which they rebuild. Who knows, maybe there is no bottom here. A large proportion of the American public never really came to terms with the Civil War and agreeing that slavery was a bad thing. Unlike Germany and Japan, which after WW2 rebuilt themselves anew with a firm focus on not becoming that sort of nation again, the US never had such a reckoning, and the lack of one has lingered. Trumpism is its consequence. I fear there is a long way down yet for the US (and the world) before there is a universal acknowledgement of the wrong direction. God help us all.
[Update: I don’t think I made it clear enough that I’m talking about the official comms channels of universities here, not individual academics. They should have left X ages ago.]
I’m not the first person to advocate this, but the timing and the case for it now seem even stronger. UK universities (but all HEIs really) need to get off X/Twitter as an official platform now. I have a lot of respect for colleagues in Comms, and they are balancing many different factors. It’s easy for people like me to say it, but much more difficult to undertake as an institutional policy. I get it, but now is the time.
We have seen X/Twitter transform radically. It has gone through three phases with regard to dangerous behaviour, I think:
The trade-off many of us made between the good stuff and the bad stuff was justifiable at stage 1. Lots of individual academics left at stage 2. You could justify staying at this stage by arguing it was to combat misinformation, to dilute the anger, that that’s where your audience is, or simply that it was irrelevant to your goal and brand. Stage 3 is a very different beast, however: it is explicitly an ideological platform now. And unless your ideology aligns with that, then maintaining an account there is actively supporting it. I would hope that most universities’ ideology does not align with that of Musk.
What’s more, now that Musk has more political power, it’s highly likely that he will enact it through X. This may mean removal of words he doesn’t like (e.g. decreeing that cisgender is a slur), downgrading criticisms of people and causes he promotes, upgrading the views of those he supports, etc. Timothy Snyder has twenty lessons on tyranny, and the first is “Don’t obey in advance”, stating:
Do not obey in advance. Much of the power of authoritarianism is freely given. In times like these, individuals think ahead about what a more repressive government will want, and then start to do it without being asked. You’ve already done this, haven’t you? Stop. Anticipatory obedience teaches authorities what is possible and accelerates unfreedom.
This applies to social media – don’t start curbing what you want to say in advance. And you can’t avoid that on X. So, I’m sorry, Comms colleagues, it’s time to get off that particular horse.
We went on holiday to Crete this month, and hired a car to do some trips, including to the ex-leper colony of Spinalonga (pictured) and inland villages. I already miss the sunshine and the food.
As I mentioned in my previous post, I’ve been doing actual work this month, writing a report. It has involved, inevitably, researching AI in education. With all the hype it was interesting to find lots of thought pieces about potential uses, and reviews of how students are using it, but very few actual case studies of it being applied in education. And often it was an extension of existing practice, such as learning analytics, with a layer of AI slapped on it. Also this month the term “AI slop” has come to prominence as we begin to see this type of meaningless content invade social media. It seems the AI bubble is about to burst, and the label “powered by AI” may soon be seen as something to avoid. This was inevitable I guess (you’ll acknowledge how I’m not mentioning hype curves here), and we’ll probably see it settle into a more normal mode soon. Anyway, I’m not convinced by this whole “work” thing.
(saw this meme online somewhere which made me laugh)
As if Musk going full-on fascist promoter wasn’t bad enough, it turns out there has been some dodgy behaviour in the WordPress world. Being ‘open’, it seems, isn’t any protection against tech bros being tech bros.
Books
I enjoyed Sarah Rose’s account of Robert Fortune’s theft of tea from China to set up the East India Company’s Indian tea plantations. She refers to it as industrial espionage on the grandest scale, and the book is full of details I wasn’t aware of. For instance, terrariums (or Wardian cases, as they were then known) weren’t just nice for growing plants indoors; their invention revolutionised global trade by allowing new plants to be grown in different countries of the empire. The book has also set me off trying different teas.
Another enlightening read was Sabrina Imbler’s My Life in Sea Creatures: A young queer science writer’s reflections on identity and the ocean. She takes a different sea creature for each chapter and uses it as a metaphor for aspects of her life, such as her racial identity, finding queer clubs, relationships with men, etc. In lesser hands it could be an approach that fails on one aspect or the other, with either the marine biology or the autobiography element dominating, but she balances them both beautifully and the metaphors really work as a lens for interpreting her own life. Maybe I should do one in terms of educational technologies: “Chapter 1: why I’m boring like a VLE”.
As we were on holiday, I read a lot of horror this month (ideal poolside reading, right?). If that’s your bag, I recommend CJ Tudor’s novel take on vampires (a difficult thing to pull off), Stephen Graham Jones’s latest book, which continues to explore the slasher theme, and Paul Tremblay’s Horror Movie, which riffs off the myths surrounding exploitation movies in the weirdest way.
Vinyl
Ezra Collective’s new album Dance, No One’s Watching continues to showcase the vibrant UK jazz movement, and adjacent to this was Baby Rose’s EP teaming up with Canadian jazz trio BadBadNotGood. I was on something of a smooth jazz vibe this month, as it also saw the reissue of Sade’s back catalogue, so I picked up some I didn’t own, including her best work, Love Deluxe. Anyway, I’m off to drink some Oolong tea and embrace the smooth sounds on vinyl.
I wrote four posts under the theme of “Things I was wrong about”, so I thought I’d reflect on that mini-series. Firstly, a few people commented that it was unusual to see someone talking about where they were wrong. So much of academia, and ed tech in particular I suspect, is populated by “See, I told you so!” claims. It’s perhaps not surprising that people don’t talk about times they were wrong; after all, much of your career is a reputation business. As I mentioned in the first post, it’s probably no coincidence that I wrote this series after I had taken semi-retirement and reached a certain stage in my career. Admitting to past errors is a luxury afforded to those in my position.
But I also think there is a certain sunk cost fallacy in ed tech, where there is a reluctance to admit that after all that time, money and hype, a lot of it didn’t amount to much. MOOCs are a good case in point: I like MOOCs, I take one every now and then, and I think they contribute positively to the global knowledge base in a time of massive misinformation. But they didn’t come near to repaying all the hope and investment in them. We don’t want to think about that, though, because it may mean people are more sceptical about our next big claims (hello, AI), so let’s just move on.
Writing the posts was an interesting process personally, though. Some themes emerged, such as the fact that you can be right for a certain period, but then things change and many of those early assumptions no longer apply. Things I got wrong were often allied to a certain degree of optimism and idealism, indeed naiveté, which, sadly, demands a more cynical perspective going forward. I think it’s also true that I was guilty of something people do a lot in ed tech, even though we proclaim the value of research: projecting out from my own experience.
In general, though, I feel it would be positive to reassess the mistaken beliefs and views we held with regard to ed tech on a regular basis. This needs to be done in a safe way: I don’t want anyone to replicate the Humiliation game in David Lodge’s Changing Places, whereby an English Lit professor wins the game by admitting he has never read Hamlet, but loses his reputation in the process. The point is not that any one person was wrong (we need to acknowledge that this is common), but why you were wrong, and also why you felt you were right, in terms of the technology (not your personality) and how the context panned out. This kind of knowledge might be important when it comes to implementing future technology. Hey, I’ll come and convene “Things We Were Wrong About” sessions for you, and we can all reminisce about wikis/BBS/Second Life/cMOOCs afterwards.