I don't want to be a hater, but exposing access to your homelab through a "fully vibe coded" application (it's mentioned at the bottom of the README) is probably not a good idea.
I guess at least they're being honest, but I would agree - there's a large delta between AI-assisted and AI-driven, and "vibe coding" is one step further (just accepting everything the AI does without critique, so long as it "works").
Great for prototyping, really bad for exposing anything of any value to the internet.
Github should have "LLM" as a language for repos that self-report as vibe coded, or at least this kind of disclosure should be at the top of the README, not an afterthought.
Also the "If you're Anti-AI please don't use this." is pretty funny :D
I guess I must be "Anti-AI" then, because I think relying on this kind of code is wild.
I fully support the AI self-disclosure, but I wonder what it is about AI-generated code that makes this a separate problem from any other code where you don't know the programmer's competence?
Is it because the AI can generate code that looks like it was made by a competent programmer, and is therefore deceiving you?
But whatever the reason, I think that if we use disclosure as a way to shame the people who do tell us, we can be assured that willingness to disclose going forward will be pretty abysmal.
I think it makes sense for stuff that is fully AI generated to the point where you commit the prompts to git. At that point, they become the real "source code" and the generated code is more of a build artifact. It makes sense to tag the language as "LLM" instead of e.g. "Python" because that's what contributors will be expected to touch when interacting with the codebase.
there is a non-zero chance that the human programmer has an interest in producing correct, secure code. there is zero chance that an LLM has the same interest. maybe those two are closer together in some cases, but not in many others.
LLMs and humans fundamentally write different kinds of code.
As humans we segment functionality and by nature avoid extra work as much as possible. That means reading someone else's code makes sense even if they are less competent, because you can see the intention.
With LLM code, everything is mixed together with no rhyme or reason, and unless you separately ask for it, old, useless functionality won't be cleaned out just because it is no longer used.
Also, people who use LLMs to vibe code bigger things usually aren't capable of reviewing what is going on in the first place, whereas if you are dangerous enough to write a bigger piece of software yourself, you probably know something about the problem at a deeper level and can test it.
I don't really see shaming here. If you vibe code something and you are proud of it, good for you, but LLMs currently are not capable of creating good software.
I must be Doing It Wrong(TM), because my experience has been pretty negative overall. Is there like a FAQ or a HOWTO or hell even a MAKE.MONEY.FAST floating around that might clue me in?
I guess I have to get into the habit of checking for such things, since I never assumed the possibility. I'd prefer this info at the top of the README, though – it has much more information value than the logo that deceived me into thinking this is a mature project.
Regardless: what benefits would this have over WireGuard?
Perhaps not requiring a WireGuard client on the machine you are accessing from. There are several circumstances where installing a VPN client isn't possible or practical.
It's getting scary how many security-related apps are being vibe coded by people with very little security experience (not a knock, heh, on OP; they could very well be experienced).
I'm pro security. The gall to put something out there and pretend that its being vibe coded is not a big deal, possibly exposing hundreds of people to security issues. Jesus.
I mean, you are free to not use it; it's for personal use. I was annoyed by all the VPN-based solutions and built knocker to have something that works without installing it on each and every device.
It's open source. Audit it like you would any other service that exposes your homelab to the internet. How do you know XYZ repo isn't some bootcamper's capstone project? I bet those are even less secure.
Edit: I should have mentioned I am a bootcamp grad, not just throwing random shade.
If I had to audit security services before exposing my homelab to the internet, I wouldn't use those services in the first place. I'm fine trying things out, but this is a very important security boundary, and it's a solved problem. Why risk it with an auditor who does it as a hobby (me)?
I mean it's just using firewalld.
You can't inspect the rules.
For me it's simple enough that it shouldn't be a big security issue, but I understand and that's why I wrote that in the readme.
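For what it's worth, runtime firewalld rules are inspectable on the host itself with `firewall-cmd --list-rich-rules`. As a rough illustration (not knocker's actual implementation; the IP, port, and timeout below are made-up placeholders), a knock-style tool would typically add a temporary per-source allow rule along these lines:

```python
import shlex

def allow_rule_cmd(client_ip: str, port: int, timeout_s: int = 60) -> list[str]:
    """Build (but don't run) a firewall-cmd invocation that would let
    client_ip reach `port` for timeout_s seconds. The --timeout flag makes
    the runtime rule expire on its own, and `firewall-cmd --list-rich-rules`
    shows whatever rules are currently active."""
    rule = (
        f'rule family="ipv4" source address="{client_ip}" '
        f'port port="{port}" protocol="tcp" accept'
    )
    return ["firewall-cmd", f"--add-rich-rule={rule}", f"--timeout={timeout_s}s"]

# Print the command instead of executing it (executing requires root + firewalld).
print(shlex.join(allow_rule_cmd("203.0.113.7", 8080)))
```

Because the rule is runtime-only and expires via `--timeout`, nothing persists across a firewalld reload, which is the behavior you'd want from a knock daemon.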
We should start a movement among personal-blog people, where each personal website is encouraged to attach 1 to 5 links (so as not to overwhelm) to other personal websites they find interesting.
That way, if the chain is solid enough, we can bring back the notion of "surfing the internet".
Hi everyone! I recently did a small project for myself where I joined together my home network and the network at my parents' house, where I spend a lot of time. I basically wanted the devices in both networks to be able to talk to each other as if they were in a single network.
I am in no way a specialist when it comes to networking, so please let me know if there is something I missed here or could do better.
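Since WireGuard came up earlier in the thread: this "two LANs acting as one" setup is exactly its site-to-site use case. A minimal sketch of one side's config, assuming made-up subnets (192.168.1.0/24 at home, 192.168.2.0/24 at the parents' house), a placeholder endpoint, and placeholder keys:

```ini
# Site A (home) router, LAN 192.168.1.0/24.
# Every key, address, and hostname here is a placeholder.
[Interface]
Address = 10.0.0.1/24
PrivateKey = <site-A-private-key>
ListenPort = 51820

[Peer]
# Site B (parents') router, LAN 192.168.2.0/24
PublicKey = <site-B-public-key>
Endpoint = parents.example.org:51820
# Route the remote LAN (plus the peer's tunnel address) through this peer
AllowedIPs = 10.0.0.2/32, 192.168.2.0/24
PersistentKeepalive = 25
```

Site B mirrors this with the subnets swapped. Both routers also need IP forwarding enabled (`net.ipv4.ip_forward=1`), and devices on each LAN need a route to the other subnet via their local WireGuard box (usually a static route on the LAN's default gateway).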
The idea itself sounds fun though