Weeknotes are where I write about what I’m working on or thinking about this week, plus a few things worth sharing, without worrying too much about the ideas being fully formed.

Thinking about: How am I thinking about GenAI right now?

I think AI is a genuinely interesting and useful tool. That’s controversial among many of my peers, but in my experience it’s true. That said, I think AI is mostly interesting and useful for experts (and possibly emerging experts) working within their areas of expertise. This is really because you need to understand the specific issues and language of an area to meaningfully prompt within that area and get interesting responses. You also need to be able to use your judgment to know when AI is giving you something useful or true.

I do not think it’s very useful for novices, at least from what I have seen.

I also think it’s primarily useful for enhancing things you create, not for creating things on its own. It’s a little hard to describe what I mean here, but I would start by saying that while I’m not at all interested in creating things with AI, I am interested in using AI to create better things. And I think there is a real question about whether this technology can ever be said to be useful for creating meaningful things on its own or from short prompts.

I also don’t think AI is a time saver. I probably spend more time when I’m using AI, though I think I often end up creating something better.

There are exceptions to the above. The ability to create fully functional code by prompting and troubleshooting is really fascinating, and I’ve built a number of genuinely useful and interesting (for me at least) scripts and short programs for my own purposes.

That said, the code I’m creating is for things I am knowledgeable about, so even though I think coding could be an exception to the general rules above, it does seem to work best when you know the content and purpose and just need a little code to make it work.

Institutional Guidelines

I’m on a new working group at my institution that is tasked with putting together teaching and learning guidelines related to AI. We haven’t even met yet, but I think my biggest concern is making sure we avoid using AI in any way that could directly impact individual students.

What I mean by that is that we need to keep AI out of spaces where it could either punish students or direct their trajectory: punishing students by applying institutional sanctions, for instance in academic integrity cases, or directing their trajectory by, for instance, advising them to take a particular program or class. My concern here is that either the implicit biases within the AI models will have a negative impact, or AI systems will push students towards outcomes that benefit our institutions, for example by steering them towards programs with higher returns.

Student advising and academic integrity should remain firmly in the hands of actual people who can explain their decision-making and be accountable for their actions. I think this will be challenging because these areas are precisely where institutions may see AI as a way to cut costs in the immediate future.

Related to this: I think we need to ensure that any decision made by an AI or with the help of an AI can be traced back to a real person who is ultimately accountable for that decision. Which means we need to talk a lot about transparency and how AI is being used.

And it is being used. By students, at least. Many faculty, on the other hand, are not using it and seem stuck in an understanding of its capabilities from about two years ago. There is a lot of fear, uncertainty, and hesitancy, and almost no time to meaningfully engage.

Which is unfortunate because instead of faculty within disciplines exploring AI and determining what should be done about it—or even if anything should be done about it—in relation to their disciplinary expertise and values, there is a vacuum that is being filled by the most negative and most optimistic voices. Generally, the negative voices are sidelined, so the optimists have the floor.

And again, this brings me back to this idea that AI is most useful for people who are using it either in their area of expertise or an emerging area of expertise. So for me, the interesting space right now is how to engage with actual disciplinary experts and emerging experts.

How do we get people to think of themselves as having an expertise and start exploring GenAI from that place, while still remaining skeptical? How do we support those building an emerging expertise while recognizing the perils of doing so? How can we prevent the foreclosure of exploration and decision making that says, on one hand, never, and on the other hand, everywhere?

Which leads to my final (current) thought about all this: I would prefer to see most decisions made by departments, disciplines, and individual faculty members, not institutions. Doing so leaves the door open to different approaches, including refusal and skepticism. Institutions have legal obligations that require us to protect our students and staff, but the right place for many of these decisions is the discipline or the individual faculty member.

Sharing

Klaudia Jaźwińska and Aisvarya Chandrasekar: We Compared Eight AI Search Engines. They’re All Bad at Citing News.

Premium models, such as Perplexity Pro ($20/month) or Grok 3 ($40/month), might be assumed to be more trustworthy than their free counterparts, given their higher cost and purported computational advantages. However, our tests showed that while both answered more prompts correctly than their corresponding free equivalents, they paradoxically also demonstrated higher error rates. This contradiction stems primarily from their tendency to provide definitive, but wrong, answers rather than declining to answer the question directly.

Alex Usher: Why Education in IT Fields is Different

And so the lesson here is this: IT work is a pretty specific type of work in which much store is put in learning-by-doing and formal credentials like degrees and diplomas are to some degree replaceable by micro-credentials. But most of the world of work doesn’t work that way. And as a result, it’s important not to over-generalize future trends in education based on what happens to work in IT. It’s sui generis.

Simon Willison: Here’s How I Use LLMs to Help Me Write Code

Over-confident is important. They’ll absolutely make mistakes—sometimes subtle, sometimes huge. These mistakes can be deeply inhuman—if a human collaborator hallucinated a non-existent library or method you would instantly lose trust in them. Don’t fall into the trap of anthropomorphizing LLMs and assuming that failures which would discredit a human should discredit the machine in the same way.

Henry Farrell: Large AI Models Are Cultural and Social Technologies

Rather than being intelligent agents, Large Models combine the features of cultural and social technologies in a new way. They generate summaries of unmanageably large and complex bodies of human-generated information. But these systems do not merely summarize this information, like library catalogs, Internet search, and Wikipedia. They also can reorganize and reconstruct representations or “simulations” (1) of this information at scale and in novel ways, like markets, states and bureaucracies. Just as market prices are lossy representations of the underlying allocations and uses of resources, and government statistics and bureaucratic categories imperfectly represent the characteristics of underlying populations, so too Large Models are ‘lossy JPEGs’ (6) of the data corpora on which they have been trained.

New Socialist: AI: The New Aesthetics of Fascism

No amount of normalisation and ‘validation’, however, can alter the fact that AI imagery looks like shit. But that, I want to argue, is its main draw to the right. If AI was capable of producing art that was formally competent, surprising, soulful, then they wouldn’t want it. They would be repelled by it.
