As soon as everybody is paying spot prices, balcony power stations are not economically viable anymore. Even today, on a sunny day, spot prices for electricity are either very low or even negative. The more solar power is available, the lower these prices will be. So your balcony power station is replacing electricity you could get for free anyway. At night, when you are not producing electricity, you still need to buy the expensive electricity from fossil plants.
The reason why personal solar installations are profitable is that you can buy electricity for fixed prices from your local power company. You pay the average of the vastly different low (or negative) prices during the day and the extremely expensive prices on windstill nights. Solar allows you to use your own electricity when the average is below spot prices, and get power for much less when the price you pay is cheaper than spot prices. It's like a state-approved scheme to play the market in the name of decarbonization while actually increasing everybody else's prices and possibly even CO2 emissions.
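A toy illustration of that averaging effect, with made-up half-hourly prices (the numbers are illustrative only, not real tariffs):

```python
# Hypothetical half-hourly spot prices (EUR/kWh) for one day: 48 periods.
day_spot = [-0.02] * 16   # sunny midday periods, slightly negative prices
night_spot = [0.40] * 32  # windstill night, expensive fossil backup

spot = day_spot + night_spot
fixed_tariff = sum(spot) / len(spot)  # flat tariff = average of spot

# A solar household pays the flat average but only displaces the *cheap*
# daytime periods with its own production, so the grid power it actually
# avoids buying is worth far less than what it saves on its bill.
print(f"flat tariff: {fixed_tariff:.3f} EUR/kWh")
print(f"midday spot: {day_spot[0]:.3f} EUR/kWh")
```

That gap between the flat tariff and the midday spot price is exactly the subsidy the comment describes.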
Which is never, because even then you are still paying some sort of taxes on top of the spot prices and also network fees.
The price of electricity from the network also has to include the price of delivery, while homemade electricity only has to recoup initial investment.
Of course this means that, given enough home installations (in places with enough sun), the price of electricity from the network will rise, more people will install their own stations, some will even disconnect, rinse and repeat. I read somewhere this exact situation is already playing out in Pakistan.
There are various good websites for showing the UK generation mix, but pricing seems less public. A lot seems to be done on day-ahead, which is pricing for the whole day not minute by minute. Is there a minute-by-minute ticker? Tariff?
(the reason I'm asking is that I'm skeptical as to how true this is for places that aren't California)
You can see spot prices at the top of grid.iamkate.com for example.
It would be nice to have some belated insight into how the bids look. Like maybe a few random hours released from a week ago?
Oh, and it's half hours. You can't buy or sell five minutes of electricity, just half hours, which is why your smart meter also thinks in half hours. 48 periods per day.
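Assuming plain half-hour periods (clock-change days actually have 46 or 50), mapping a timestamp to its settlement period is trivial:

```python
from datetime import datetime

def settlement_period(ts: datetime) -> int:
    """Return the 1-based half-hour settlement period (1..48) for a timestamp.

    Simplified sketch: ignores clock-change days, which have 46 or 50 periods.
    """
    return (ts.hour * 60 + ts.minute) // 30 + 1

print(settlement_period(datetime(2026, 2, 6, 0, 15)))   # period 1
print(settlement_period(datetime(2026, 2, 6, 10, 48)))  # period 22
```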
Aha - that led me to https://bmrs.elexon.co.uk/system-prices , which shows that for the last week prices have been hovering in the £80-180/MWh range, and there was only one period of negative pricing during the day.
Wow, £100 per MWh with fossil fuels at 12% of the mix at 10:48am ... a bit more solar adoption and maybe that 12% could go away; it's morning, after all.
To me this illustrates that with renewables (solar and wind) the key is storage. You want to grab all you can during excess production/very low prices periods and then use that for the rest of the day.
You can do exactly that by buying battery packs, but (1) they are more expensive pieces of kit than solar panels and (2) the capacity and output of DIY/plug-in systems is very limited.
A quick check online also says that (in the UK) peak spot prices are usually 7am-10am and 5pm-9pm, which are basically when demand picks up or hasn't dropped yet while solar panels are useless...
> You want to grab all you can during excess production/very low prices periods and then use that for the rest of the day.
Batteries help, but even that is limited in northern countries like the UK. If you look at the data, in July '25, solar produced 2.36 TWh. But in December '25, it was only 0.535 TWh: the output in summer is >4 times the winter output. So either you need to discard ~75% of the electricity produced in summer, or you need truly gigantic batteries that store power produced in summer for winter. Neither is economical. Solar yields far less in the UK than in, for example, Florida.
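The seasonal mismatch in those numbers is stark; a quick back-of-envelope check using the figures from the comment above:

```python
# UK solar output figures quoted above (TWh).
july_twh = 2.36
december_twh = 0.535

# Summer-to-winter ratio, and the share of July output that exceeds
# the December level (i.e. what you'd curtail without seasonal storage).
ratio = july_twh / december_twh
surplus_fraction = 1 - december_twh / july_twh

print(f"summer/winter ratio: {ratio:.1f}x")              # ~4.4x
print(f"July surplus over December level: {surplus_fraction:.0%}")  # ~77%
```

So the "discard 75%" figure holds up: without multi-month storage, roughly three quarters of peak-month output has no winter counterpart in demand terms.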
In the UK wind contributes more to the grid than solar (not unexpected). Overall, the issue with either or both is still that production varies wildly over time, including within a day.
With solar specifically you have the obvious day/night cycle, which makes storage required to make the most of it.
This is why smart meters are important to providers: they can more accurately model the spot pricing adjustments, which means that you actually use LESS fossil fuel. Also, most new meter installs support bi-directional metering.
Over the last 12 months, AI agents have become dramatically better. And in the last 3 months, they have reached a point where, with some light guidance, they can write 100% of the code. Most skeptics have been convinced and are now realizing the impact. That's what you see in the stock market.
I don't know where the ceiling is. And how much of the improvement was due to better context engineering, and how much to better models. I would expect the context engineering to plateau very soon. Not sure about the models.
An even more dramatic change for the whole economy will be when non-IT, non-creative office clerks are replaced. This is mostly a matter of redesigning the interfaces around them. AI could probably already do most of the work, but getting the tasks to the AI, using its output, and communicating with third parties are still major challenges. Take someone processing insurance claims: the AI needs a way to receive the claim, to contact third parties (write emails to humans, communicate with other AI agents, maybe even call humans), and then to initiate the payout. It's already doable with today's technology, but still a lot of work.
True, but junior developers used to provide a lot of value while doing this. Now their value, while they are still figuring it out, has gone down immensely. For a company, there is no value in letting a junior dev write code anymore. And for reviewing the AI output, you need someone more experienced.
The ST had some awesome productivity programs. Tempus Word, Papyrus, Calamus...
All running on an 8 MHz computer with 1 or 2 MB of RAM, but with feature sets that hold their own against today's software.
People doing DTP with Calamus on their Ataris stuck around for a long time after the systems weren't used for much else – MIDI tooling excepted, of course.
On the other hand, in DTP you didn't have that many powerful packages on any system; besides Quark and the various Adobe tools du jour, everything paled in comparison.
For word processing, being forced to use Word was/is usually worse than for DTP, though. But feature-wise, everything seemed to converge during the 90s, so "having" to use Word instead of e.g. WordPerfect was less and less of an issue.
With some exceptions of course, most famously GRRM and other people who got into things very early sticking with the first thing they learned (i.e. WordStar), or apparently some journalists being really into XyWrite.
It's not surprising that people who write professionally would learn one tool to the point it gets out of the way and then not want to change. It's not just sticking with the first thing they learned - there's a constant churn of "tools for distraction-free writing" that address some of the complaints that people that still use older word processors have about more up-to-date systems.
Once you know the pattern, every so often you'll see a piece about a writer or journalist and the funky software they use, and you can just wait for it... it's going to be WordStar, XyWrite, one of the XEDIT editors, sometimes WordPerfect for DOS. Rarely Word for DOS. Neal Stephenson uses emacs, but he's an outlier in a lot of ways. I think there was a piece linked here recently by a journalist who uses macOS TextEdit for note-taking, which dates back to NeXTSTEP. (not exactly the same thing, but consider)
In the late 1990s, supposedly a considerable extension of Mac use for DTP was that Quark could be heavily automated with AppleScript, and some publishing houses had non-trivial workflows built that way to reduce time spent on preparation.
Maybe it's wishful thinking, being one of the SaaS-developing developers he describes. But I think that only the complexity required for a SaaS is increasing. You certainly can't earn millions with the kind of SaaS that used to take a week or two, and can now be done on a weekend. So I am trying the kind of SaaS that I never dared to start, knowing that it would take a year or two of my spare time. And with AI agents, I now hope to complete it in 3 or 4 months, with a lot of extra features I would never have dared to include in an MVP.
Mainly, a friendly and simple UI. Feedly looks like it hasn't gotten much love recently. Inoreader is too cluttered for my taste, though it has a feature set I can't match any time soon.
I have plenty of other ideas for what to build on top of it: offering an SDK and APIs so you can vibe-code the UI you want, a built-in podcast listener, using news from aggregated feeds to build a personalized AI feed. But the first step is to reach the Google Reader feature set minus social features.
I think you're in a tough market, but I'll agree that Feedly hasn't gotten much love, and is clearly aiming for a more enterprise market.
API access is worth chasing. There was something I wanted to do with Feedly (I've already forgotten what it was) but once I saw their APIs were hidden behind some enterprise level plan, that was the end of that. If we're in a world where everyone has a personal AI agent, giving their agent an API key to their RSS sync account... that might have some interest.
Feedly seems hostile to third-party client access (ie mobile & desktop apps), so being friendlier towards RSS clients could be of interest.
Personalized AI feed is a good idea but you don't have all the personalized year of context that my Claude does. My AI agent is (probably) going to do a better job of choosing the most relevant stuff.
And personally, less interested in podcasts in my RSS app. That's something for Pocket Casts / AntennaPod. I like my audio separate from my RSS. But that's me.
> I think you're in a tough market, but I'll agree that Feedly hasn't gotten much love, and is clearly aiming for a more enterprise market.
Yes, enterprise is certainly where the money is (Feedly's plans start at $1600/month...), but as a solo dev working on a side-project, that's not an accessible market for me anyway. So I try to create a service that's simple and cheap.
> My AI agent is (probably) going to do a better job of choosing the most relevant stuff.
The idea would be basically: the feed reader knows the user's interests because of the subscriptions, and knows the last time the user logged in. So it can filter what happened since then; it can also order the posts by relevance, allowing the user to catch up.
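That filter-and-rank step could be sketched like this (hypothetical; all the field names and the tag-overlap scoring are made up for illustration, not the actual design):

```python
from datetime import datetime

def rank_unread(posts, interests, last_login):
    """Keep posts published since last_login, ranked by overlap with the
    user's interest tags (derived from their subscriptions)."""
    unread = [p for p in posts if p["published"] > last_login]
    return sorted(unread,
                  key=lambda p: len(interests & set(p["tags"])),
                  reverse=True)

posts = [
    {"title": "a", "published": datetime(2024, 5, 2), "tags": ["rust"]},
    {"title": "b", "published": datetime(2024, 5, 3), "tags": ["rss", "ai"]},
    {"title": "c", "published": datetime(2024, 4, 1), "tags": ["rss"]},
]
ranked = rank_unread(posts, {"rss", "ai"}, datetime(2024, 5, 1))
print([p["title"] for p in ranked])  # ['b', 'a']
```

A real implementation would use something richer than tag overlap (embeddings, click history), but the shape of the problem is the same.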
And in a second step, an agent could even write the posts dynamically, summarizing information gathered from the user's feed, possibly even adjusted to the user's level of knowledge and offering background info where needed.
> And personally, less interested in podcasts in my RSS app. That's something for Pocket Casts / AntennaPod. I like my audio separate from my RSS.
There are some feeds that are more like a mixture of text and podcast. I usually read only the text, but sometimes it catches my interest and I want to listen to one or two posts. That's when I start hating the lack of podcast support in Feedly.
I'm using (self-hosted) Nextcloud News. What would your... service? Tool? Product? ... offer beyond what NN does? It is quite simple as well, offers an uncluttered interface, and keeps my subscriptions as private as RSS subscriptions can be. I suspect you're targeting a different market from the one served by self-hosted services like Nextcloud?
I am not familiar with Nextcloud News. In the first version, it probably won't offer much for you, besides having a catalog of feeds, the ability to search them, and subscribe with one click, which is usually not offered by non-cloud RSS readers.
For people who do not want to use self-hosted services (which generally includes me), it offers simplicity. Open the page, choose Google as auth provider, confirm, and you will get a friendly start page. Click on 'follow' on one of the feeds, and you can start reading immediately. The UI is more like Facebook or X, so basically, you just need to scroll. Either in a feed of your choice, or all your feeds. It's designed to work well on small mobile screens, tablets, and desktops, with great keyboard support on the latter. Larger screens use two or three columns.
Tough market. What’s your differentiator over Readwise? They are crushing it on the “power user feed reader”.
Best of luck though, I think this is a very promising space. (But my bet is you can do all the interesting stuff in vibe-coded thin UI + OSS pipeline.)
Simplicity. I can get you reading your first feed in under a minute. Also, I am not really thinking about monetization right now, but I am building a feed reader I want to use. I wouldn't want to spend $13 a month for it.
> thin UI + OSS pipeline
No, the UI isn't that thin. I am optimizing it to minimize my costs for operating it. Everything I can do inside the client is done inside the client. Interactions with the server are mostly limited to polling every 2 minutes for feed updates, and sending read markers after 3 seconds of inactivity. Feed data is stored on CDN, compressed.
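The "send read markers after 3 seconds of inactivity" part is essentially a debounce; a minimal sketch of the idea (in Python for readability; the real client would do this in the browser, and the class name and callback are invented here):

```python
import threading

class ReadMarkerBuffer:
    """Collect read markers and flush them after a quiet period, so the
    server sees one batched request instead of one per article."""

    def __init__(self, flush, delay=3.0):
        self._flush = flush      # callable taking a set of article ids
        self._delay = delay
        self._pending = set()
        self._timer = None

    def mark_read(self, article_id):
        self._pending.add(article_id)
        if self._timer is not None:
            self._timer.cancel()  # reset the inactivity window
        self._timer = threading.Timer(self._delay, self._send)
        self._timer.start()

    def _send(self):
        batch, self._pending = self._pending, set()
        self._flush(batch)
```

Every new mark resets the timer, so a fast scrolling session produces a single request once the user pauses.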
The kind of clothes we're talking about are not regular clothes. It's the unsellable kind. When H&M is doing a big sale, order the clothes by price, lowest price first. You will find stuff so hideous that they can't even sell it for four bucks. That's what I would expect most of the disposed clothing to look like.
"Agents should work overnight, on commutes, in meetings, asynchronously."
If I read stuff like that, I wonder what the F they are doing. Agents work overnight? On what? Stuck in some loop, trying to figure out how to solve a bug by trial and error because the agent isn't capable of finding the right solution? Nothing good will come out of that. When the agent clearly isn't capable of solving an issue in a reasonable amount of time, it needs help. Quite often, a hint is enough. That, of course, requires the developer to still understand what the agent is doing. Otherwise, most likely, it will sooner or later do something stupid to "solve" the issue. And later, you need to clean up that mess.
If your prompt is good and the agent is capable of implementing it correctly, it will be done in 10 minutes or less. If not, you still need to step in.
> I wonder how our comments will age in a few years.
I don't think there will be a future where agents need to work on a limited piece of code for hours. Either they are smart enough to do it in a limited amount of time, or someone smarter needs to get involved.
> This can't be a serious project. It must be a greenfield startup that's just starting.
I rarely review UI code. Doesn't mean that I don't need to step in from time to time, but generally, I don't care enough about the UI code to review it line-by-line.
> I wonder how our comments will age in a few years.
Badly. While I wouldn't assign a task to an LLM that requires such a long running time right now (for many reasons: control, cost etc) I am fully aware that it might eventually be something I do. Especially considering how fast I went from tab completion to whole functions to having LLMs write most of the code.
My competition right now is probably the grifters and hustlers already doing this, and not the software engineers that "know better". Laughing at the inevitable security disasters and other vibe coded fiascos while back-patting each other is funny but missing the forest for the trees.
We don't have enough context here really. For simple changes, sure - 10min is plenty. But imagine you actually have a big spec ready, with graphical designs, cucumber tests, integration tests, sample data, very detailed requirements for multiple components, etc. If the tests are well integrated and the harness is solid, I don't see a reason not to let it go for a couple hours or more. At some point you just can't implement things using the agent in a few simple iterations. If it can succeed on a longer timeline without interruption, that may be actually a sign of good upfront design.
To be clear, this is not a hypothetical situation. I wrote long specs like that and had large chunks of services successfully implemented up to around 2h real-time. And that was limited by the complexity of what I needed, not by what the agent could handle.
To be fair, for major features 30m to an hour isn’t out of this world. Browser testing is critical at this point but it _really_ slows down the AI in the last 15% of the process.
I can see overnight for a prototype of a completely new project with a detailed SPEC.md and a project requirements file that it eats up as it goes.
10 minutes is not the limit for current models. I can have them work for hours on a problem.
Humans are not the only thing initiating prompts either. Exceptions and crashes coming in from production trigger agentic workflows to work on fixes. These can happen autonomously over night, 24/7.
> 10 minutes is not the limit for current models. I can have them work for hours on a problem.
Admittedly, I have never tried to run it that long. If 10 minutes are not enough, I check what it is doing and tell it what to do differently, or what to look at, or offer to run it with debug logs. Recently, I also had a case where Opus was working on an issue forever, fixing one issue and thereby introducing another, then fixing that, only for the original issue to reappear. Then I tried Codex, and it fixed it on the first attempt. So changing models can certainly help.
But do you really get a good solution after running it for hours? To me, that sounds like it doesn't understand the issue completely.
Sometimes it doesn't work or it will give up early, but considering these run when I'm not working it is not a big deal. When it does work I would say that it has figured out that hard part of the solution. I may have to do another prompt to clean it up a bit, but it got the hard work out of the way.
>or offer to run it with debug logs.
Enabling it to add its own debug logs and use a debugger can allow it to do these loops itself and understand where it's going wrong with its current approach.
I can think of one reason for letting agents run overnight: running large models locally is incredibly slow or incredibly expensive. Even more so with the recent RAM price spikes thanks to the AI bubble. Running AI overnight can be a decent solution for solving complex prompts without being dependent on the cloud.
This approach breaks the moment you need to provide any form of feedback, of course.
> I don't know what it is, but trying to coax my goddamn tooling into doing what I want is not why I got into this field.
I can understand that, but as long as the tooling is still faster than doing it manually, that's the world we live in. Slower ways to 'craft' software are a hobby, not a profession.
(I'm glad I'm in it for building stuff, not for coding - I love the productivity gains).
Generally, my stance is that I add more value by doing whatever ridiculous thing people ask me to change than waste my time arguing about it. There are some obvious exceptions, like when the suggestions don't work or make the codebase significantly worse. But other than that, I do whatever people suggest, to save my time, their time, and deliver faster. And often, once you're done with their initial suggestions, people just approve.
This doesn't help all the time. There are those people who still keep finding things they want you to change a week after they first reviewed the code. I try to avoid including them in the code review. The alternative is to talk to your manager about making some rules, like giving reviewers only a day or two to review new code. It's easy to argue for that because those late comments really hinder productivity.