
Economics Nerd Central

People have hyped GenAI too much. It's amazing, I use it dozens of times a day, help customers use it and even build services on it, but it is overhyped today.
Tbh I love it for doing the donkey work in programming.

Especially SQL which is a pain in the hole to write.

Point it at your code base and tell it to scaffold out a layout based on your conventions and style and it'll get a good bit of the way there.

And if you're good at naming shit and have a decent architecture, its predictive coding can be a very nice feature.

And then there's fucking cunting JavaScript. God bless it for knowing all the stupid dopey fucking idiosyncrasies.

I reckon your average full stack dev could seriously boost their productivity with Copilot if they know what they're doing.

Best description I've heard is that it's like a sewing machine. Fantastic if you know how to sew, useless otherwise.
 
I use Copilot to write a coherent exec summary for tech reports; it's excellent for that. I tell it to summarise each section and not to use any info other than what has been written already.
 
I have exactly the same experience as all three of ye, depending on what I'm doing.

As an example, using it to research some stuff for a thing we're not sure has been done before, it turned up some useful information, but all of the research paper links it provided were garbage.
Thankfully, because something like it is probably in the training data, taking the hallucinated papers as the basis for a search turned up pretty good results.
But as for being able to explore solving the problem from first principles? Useless.

I wrote my first bit of actually paid, delivered code for a customer earlier this year. It wasn't complex, just needed to be done, and a junior could have done it, but I vibe coded my way through it so that it works and passes the tests. No way I'd have been able to deliver that as part of a project 3 years ago without a massive amount of training time and googling of Stack Overflow.

Found that with one project, implementing code against a standard, it cut about 66% of the effort, which is impressive.
As for a business model thingy I had to do, it was genuinely insightful in a way I hadn't thought of (and might make the client many, many <currency>).

But trying to build agentic workflows that are reliable, secure and better than simple RPA is hard. Most organisations grossly underestimate how hard it is.

So not saying it's all crap, absolutely not, but your mileage varies massively and the research suggests that it's not half as good as the hype.
 
Yeah I’m broadly in the same boat as a lot of ye, but I think a massive part of the variance people see comes down to process and operator proficiency, not just model capability.


My workplace has cursor rules that we all subscribe to, and between that and internal tooling, we can stop a lot of the bad stuff before it ever becomes an issue. Guardrails aren't perfect, but they make a huge difference. The people getting the best results are the ones using AI inside a proper workflow, not just ad-hoc vibes prompting. One thing that was weird initially but is now really useful is using Wispr Flow, or voice prompts generally, for these agents: the more specific you can be, the better the output, and the fastest way to get there is to speak at the computer and have it transcribe and run the prompt.


It really is shit in = shit out, and your ability to consistently get “good out” is driven by how proficient you are with the tools — same as any piece of technical kit. A junior with Copilot/Claude != a senior dev magically replaced. A good engineer with them can turn into a much faster good engineer.


As an industry it's definitely a bubble, but that's not to say people aren't making money from it, and it's not to say it isn't a useful tool.
 
One of the problems is that a senior dev with a good workflow is more productive than the same senior with a junior.

The problem is that senior devs don't automatically know how to get the best out of it, and companies are concluding that they can cut the junior devs entirely.
In fact, a good junior dev who knows what they're doing with an AI dev workflow can, depending on the type of work, be more productive than a senior dev.

But it very much depends on the context*



*Pun intended
 
Here, the “prompt” was how many blood cells are in a millilitre of blood. It then told me a millimetre cubed was a millilitre.

A Yahoo internet search in 1998 would give a better answer than that.
 
The most interesting thing about AI is realising what people considered intelligent.

It is very good at summarising corporate into corporate for corporate. I don’t trust it for summarising technical into corporate though.
I used Copilot within O365, so it's not the free-to-use version that everyone has; perhaps that was more easily constrained, although I'm not sure why that would be. It did a good job though of taking a multidisciplinary report and creating the exec summary. Some of the chapters were very wordy, while others were numbers or modelling-heavy, and it coped with the lot.
 
You're working with a probabilistic neural net. It's not trying to give you the right answer, it's trying to give you the most probable answer.

So if you ask it a question which has a right or wrong answer and nothing in between, then it has a very good chance of being wrong.
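A toy sketch of that point, with a completely made-up answer distribution (the counts are illustrative, not real training data): picking the single "most probable" answer just returns whatever dominated the data, regardless of truth.

```python
from collections import Counter

# Hypothetical "training data": answers a model might have seen for
# "how big is a cubic millimetre?" -- the counts here are made up.
training_answers = ["1 microlitre"] * 6 + ["1 millilitre"] * 4

def most_probable(answers):
    """Return the single most frequent answer, like greedy decoding."""
    return Counter(answers).most_common(1)[0][0]

# The "model" answers with whatever dominated its data; flip the skew
# and it confidently flips its answer, while the truth never changed.
skewed_answers = ["1 millilitre"] * 6 + ["1 microlitre"] * 4
print(most_probable(training_answers))  # 1 microlitre (correct)
print(most_probable(skewed_answers))    # 1 millilitre (wrong)
```

Right-or-wrong factual questions are exactly where this bites: the most probable continuation and the correct one can be two different things.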

Your prompts also need shaping, especially to set the context, and sometimes the source data is just too polluted and the model needs fine-tuning to bring it into line with your use case. To give an example, I was working with a water company doing anomaly detection on their network (mostly labelling of data...). One of the biggest problems they had was that the vast, vast majority of training data is based on domestic plumbing pipes, which obviously is nothing like what they're working with.
So they had to do some training of the model themselves, which made their use case much more prominent in the data, making it more likely they get the right answer. Worked quite well in the end, certainly way better than their existing method.

And of course, out of the box, the model had absolutely no idea about their network anyway as it's all protected information. So even if you have it working with background documents or systems (e.g. RAG or MCP...) it still struggles anyway because the probabilistic neural pathways around your use case probably haven't been established within the model itself.
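For what it's worth, the retrieval half of RAG can be sketched in a few lines. This is a deliberately crude word-overlap version with made-up document names and contents (real systems use learned embeddings), but the shape is the same: rank your private documents against the query and stuff the best match into the prompt.

```python
import math
from collections import Counter

# Made-up private documents standing in for a company's protected data.
docs = {
    "pipe_spec": "trunk main pipe diameter pressure rating network",
    "hr_policy": "holiday leave policy approval manager",
}

def cosine(a, b):
    """Cosine similarity over word counts -- a crude stand-in for embeddings."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query):
    """Return the name of the best-matching document."""
    return max(docs, key=lambda d: cosine(query, docs[d]))

query = "what pressure rating does the trunk main pipe need"
best = retrieve(query)
prompt = f"Context: {docs[best]}\n\nQuestion: {query}"
print(best)  # pipe_spec
```

Note the limitation the post describes still applies: retrieval gets the right text in front of the model, but the model still has to reason about material its weights have barely seen.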


Fundamentally, if you have work where a clever graduate can come in and do the work reliably without needing briefing, then that's a good starting point for these sorts of AI model. If it takes 6-12 months to get their head around the job fresh from university, then maybe it'll be ok. If you need 20 years of industry experience then good fucking luck.
Some models are much better than others; I don't really rate GPT-5 integrated into Copilot. They really limit the amount of processing it's allowed to do, which makes it far less powerful.

Gemini 2.5 Pro, on the other hand, has impressed me.
Here's its response to Kube's question without any "prompt engineering". I have absolutely no idea if it's right:
A millilitre of blood contains a staggering number of cells, with the precise count varying based on several factors, including gender and overall health. The vast majority of these are red blood cells, followed by platelets and a smaller number of white blood cells.

Red Blood Cells: These are the most numerous cells in the blood. In adult males, there are approximately 4.3 to 5.9 billion red blood cells per millilitre of blood.[1] Adult women typically have a slightly lower count, ranging from 3.5 to 5.5 billion per millilitre.[1] Another source states that women have about 4 to 5 million red blood cells per microliter, which translates to 4 to 5 billion per millilitre, while men have about 5 to 6 million per microliter, or 5 to 6 billion per millilitre.[2]

White Blood Cells: These cells of the immune system are far less numerous than red blood cells. A healthy adult typically has between 4,000 and 11,000 white blood cells per microliter, which is equivalent to 4 million to 11 million per millilitre of blood.[3][4] These cells make up approximately 1% of the total blood volume in a healthy adult.[3]

Platelets: These small cell fragments are crucial for blood clotting. Their numbers in a millilitre of blood are quite high, with a normal range being between 150,000 and 400,000 per microliter.[5][6] This equates to 150 million to 400 million platelets per millilitre of blood.[7] Some sources provide a slightly wider range of 150 to 450 million per millilitre.[7]

In summary, a single millilitre of blood contains:

  • Red Blood Cells: 3.5 to 6 billion
  • White Blood Cells: 4 to 11 million
  • Platelets: 150 to 450 million
It's important to note that these are average ranges, and the actual number of blood cells can fluctuate depending on various physiological conditions.
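One thing that is easy to check in that answer is the unit conversion it leans on: 1 mL = 1,000 µL, and a cubic millimetre is a microlitre, not a millilitre (the mistake in the earlier answer). A quick sanity check of the per-microlitre figures quoted above:

```python
# 1 mL = 1,000 µL; a cubic millimetre is 1 µL, not 1 mL.
UL_PER_ML = 1_000

def per_ul_to_per_ml(count_per_ul):
    """Convert a per-microlitre count to per-millilitre."""
    return count_per_ul * UL_PER_ML

# Per-microlitre ranges quoted in the answer above:
assert per_ul_to_per_ml(4_000) == 4_000_000          # white cells, low end
assert per_ul_to_per_ml(11_000) == 11_000_000        # white cells, high end
assert per_ul_to_per_ml(150_000) == 150_000_000      # platelets, low end
assert per_ul_to_per_ml(450_000) == 450_000_000      # platelets, high end
assert per_ul_to_per_ml(5_000_000) == 5_000_000_000  # red cells, ~5 million/µL
print("conversions check out")
```

So at least the microlitre-to-millilitre arithmetic in that response is internally consistent, whatever about the underlying biology.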
 
Back to economics.

Prompted by the BOE keeping rates on hold, I've been keeping an eye on UK bond yields, because right now the UK is paying a lot more in interest than most equivalent countries.

Ireland's 30 year bond yield right now is 3.64%. The UK's is 5.2%

Ireland's debt to GDP ratio is now down to 40%, so debt interest payments are about 3.3% of total government spending.

For the UK, that's 8.3%.

To put it in perspective, if the UK had Ireland's level of debt interest spending, then the UK government would have about £60 billion more to spend than it does now, which would solve all of Rachel Reeves' problems.
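A quick back-of-envelope check of that £60 billion figure. The interest shares are from above; total UK government spending of roughly £1.2 trillion is my assumption for the sketch:

```python
# Interest shares of total spending are from the post; total spending
# of ~GBP 1.2 trillion is my assumption, not a figure from the post.
total_spend = 1.2e12       # assumed UK government spending, GBP
uk_interest_share = 0.083  # UK: 8.3% of spending goes on debt interest
ie_interest_share = 0.033  # Ireland: 3.3%

saving = total_spend * (uk_interest_share - ie_interest_share)
print(f"GBP {saving / 1e9:.0f} billion freed up")  # GBP 60 billion freed up
```

A five-percentage-point gap on a £1.2 trillion budget is where the ~£60 billion comes from.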

If it had Ireland's bond yield, then I suspect the immediate pain would be somewhat lower. That rate is not necessarily representative of what the UK pays overall; it's the 30 year rate. The 10 year rate is 4.45%, while Ireland's is 2.89%.

That 30 year rate has gotten a lot worse over recent years though, going from ~4.5% when Labour took power to its current level:

[chart: UK 30-year gilt yield since Labour took power]

It's a hell of a lot worse than 5 years ago:

[chart: UK 30-year gilt yield over the last 5 years]



As a result, the UK is in a bit of a bind. If it doesn't get its house in order, then that rate will climb, costing taxpayers more.

The term George Soros took from social science and applied to economics was Reflexivity (and a great example of why I don't think economics is a science).

Labour have set fiscal rules so that in theory, they end the "economic cycle" at the end of their time in parliament in surplus. These rules are supposed to give the bond market confidence.
Right now, the deficit is higher than they thought it would be at this point in the cycle. Forecasts of GDP growth are shite, spending is up, and those higher interest payments are making things much worse.
If they cut spending and/or increase taxes, then the budget deficit decreases, the bond market is happier, and interest rates reduce, which also reduces interest payments, improving the budget deficit.
Or maybe, if you cut spending and increase taxes, you damage growth prospects through negative fiscal multipliers, in which cases, your projections for future tax take drop, your budget deficit widens and the bond market punishes you for failing to create growth.

Or you say that in order to have growth, you accept a wider deficit and break the fiscal rules; the bond market goes along with it, interest rates decrease and your interest spending decreases, narrowing the deficit.
Or the bond market doesn't, raises interest rates and makes your already-worse deficit much wider, which will also knock business confidence and result in even steeper spending cuts and tax rises later on, which further damages growth.


The fact that the attitude of the bond market to fiscal policy makes such a dramatic difference to the outcomes is why I think economics isn't a science. That's not really a mathematical model thing with falsifiability; it's far more of a vibes game. The maths is handy for analysing situations, but at the end of the day, it's all vibes that pull at the big levers.

Anyway, one fun outcome of what you might think of as the lowest-risk option, cutting spending and raising taxes now, is that it absolutely opens the door for populists and will be absolute poison at the next election, making a Reform government far more likely. Which is a far, far worse thing for growth and stability in the long term.

Which is nice.
 