Welcome to the future
Don't Copilot anything longer than a function of about 15 lines; that way you can quickly see if it made mistakes. Verify it works, then move on to the next.
And only do that for boring, repetitive work. The tough challenges and critical parts you're (for now) better off solving yourself.
Absolutely, I think the people who say it's completely useless for code are in denial
Definitely not replacing anyone but my god it has sped up development by generating code I already know how to write 90% of
No more having to look up "what was the for loop syntax in this language again?"
"Copilot is really good at things which I already know" and that is perfectly fine
I won't say copilot is completely useless for code. I will say that it's near useless for me. The kind of code that it's good at writing is the kind of code that I can write in my sleep. When I write a for-loop to iterate over an array and print it out (for example), it takes near zero brain power. I'm on autopilot, like driving to work. On the other hand, when I was trialing copilot I'd have to check each suggestion it made to verify that it wasn't giving me garbage. Verifying copilot's suggestions takes a lot more brain power than just writing it myself. And the difference in time is minimal. It doesn't take me much longer to write it myself than it does to validate copilot's work.
Tried to learn coding using ChatGPT. Wanted to make my own game engine for a phone game. Ended up looking up tutorials.
If you are using "game engine" in the industry standard way, you would want to learn object oriented programming first, then learn how to use an existing game engine, and then MAYBE, in a long time, with a big team, build your own game engine.
ChatGPT, like any other programming tool, works a whole lot better when you're well versed in how the process should go. It speeds up the workflow of a professional; it doesn't make a new worker better.
AI is great for finding small flaws or reciting documentation in a more succinct way. But writing new code and functions? That's a fool's errand if you're hoping it works out.
I use it for writing functions and snippets all the time, at least in Python and Rust. As long as you describe what you want it to do properly, it works great.
Example I used recently: "Please generate me a rust function that will take a u32 user id and return a unique RGB colour"
It generated the function, I plugged it in, and it worked perfectly the first time.
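Something along these lines, for the curious (just a sketch of the shape, not the exact code it gave me; strictly speaking a u32 can't map *uniquely* into 24 bits of colour, so "unique" really means deterministic and well spread):

```rust
/// Map a u32 user id to a deterministic RGB colour.
/// Caveat: 2^32 ids into 2^24 colours can't be injective,
/// so distinct ids may collide; the hash just spreads them out.
fn user_id_to_rgb(id: u32) -> (u8, u8, u8) {
    // Multiplicative hash (Knuth's constant 2654435761) so that
    // consecutive ids don't produce near-identical colours.
    let h = id.wrapping_mul(2654435761);
    ((h >> 16) as u8, (h >> 8) as u8, h as u8)
}

fn main() {
    for id in 0..5u32 {
        let (r, g, b) = user_id_to_rgb(id);
        println!("user {id}: #{r:02x}{g:02x}{b:02x}");
    }
}
```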
To be honest yes. That is the sort of thing that sounds great. I have a little project I'm about to start so I'll take a look
I haven't been in development for nearly 20 years now, but I assumed it worked like this:
You write unit tests for a very specific function of rather limited scope, then you let the AI generate the function. How could this work otherwise?
Bonus points if you let the AI divide your overall problem into smaller problems of manageable scope. That wouldn't involve code generation as such...
Am I wrong with this approach?
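Concretely, I'm picturing something like this (a totally made-up example; `clamp_to_range` isn't from any real project):

```rust
/// Step 2: the function the AI is asked to generate,
/// constrained by the tests below.
fn clamp_to_range(value: i32, min: i32, max: i32) -> i32 {
    value.max(min).min(max)
}

#[cfg(test)]
mod tests {
    use super::*;

    // Step 1: tests written first, pinning down the spec
    // before any implementation (human or AI) exists.
    #[test]
    fn below_range_is_raised_to_min() {
        assert_eq!(clamp_to_range(-5, 0, 10), 0);
    }

    #[test]
    fn above_range_is_lowered_to_max() {
        assert_eq!(clamp_to_range(99, 0, 10), 10);
    }

    #[test]
    fn inside_range_is_unchanged() {
        assert_eq!(clamp_to_range(7, 0, 10), 7);
    }
}
```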
At that point you should be able to just write the code yourself.
The A"I" will either make mistakes even under defined bounds, or it will never make any mistakes, in which case it's not an autocomplete, it's a compiler, and we've just gone full circle.
The complexity here lies in having to craft a comprehensive enough spec. Correctness is one aspect, but another is performance. If the AI craps out code that passes your tests but does it in a really inefficient way, then it's still a problem.
Also worth noting that you don't actually need AI to do such things. For example, Barliman is a tool that can do program synthesis: given a set of tests to pass, it attempts to complete the program for you. Synthesis is performed using logic programming. Not only is it capable of generating code, but it can also reuse code it's already come up with as a basis for solving bigger problems.
https://github.com/webyrd/Barliman
Here's a talk about how it works: https://www.youtube.com/watch?v=er_lLvkklsk
I tend to write a comment describing what I want to do and have Copilot suggest the next 1-8 lines for me. I then check whether the code is correct and fix it if necessary.
For small tasks it's usually good enough, and I've already written a comment explaining what the code does. It can also be convenient to use it to explore an unknown library or functionality quickly.
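For example (a hypothetical exchange, not a real Copilot transcript):

```rust
// Return the sum of the even numbers in `values`.
// (I write the comment above; the body below is the kind of
// short completion Copilot offers, which I still check by hand.)
fn sum_of_evens(values: &[i32]) -> i32 {
    values.iter().copied().filter(|v| v % 2 == 0).sum()
}

fn main() {
    assert_eq!(sum_of_evens(&[1, 2, 3, 4]), 6);
}
```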
I told it to generate a pretty complex React component yesterday and it worked on the first try. It even made a style sheet, and it actually looks good.
It's so good when it works on the first try. But when it doesn't, it can really fool people with totally nonfunctional code. AI is like the genie in the bottle: you really need to ask the right question.
Sloppy joes is the new spaghetti code
Hi ChatGPT, write code with no memory or logic errors to perform
<thing you want to do>
I'm not sure how to talk to ChatGPT; I'm assuming it's like Siri.
LLMs are statistical word-association machines, or token-association machines, more accurately. So if you tell one not to make mistakes, it'll likely weight the output towards having validation, checks, etc. It might still produce silly output saying no mistakes were made despite having bugs or logic errors. But LLMs are just a tool! So use them for what they're good at and can actually do, not what they themselves claim they can do lol.
I asked ChatGPT to make me a business logo. It spelt the name wrong in every instance. "OK, keep that picture but replace the words with this exact spelling." It spelt it wrong a different way. This continued for about 20 rounds before I gave up. It won't follow explicit instructions, so I can't see it being that amazing at code.