I have heard that term but need to read up about it. As I usually hear it in a derogatory reference, I have not been too rushed to learn more about it. ;)
It's the script kiddies of old going into ChatGPT and typing in a prompt to have it build some software.
They compile, run, adjust their prompt (which generates completely new code), then compile, run, ...
very efficient and, as you pointed out, means you never get the same code (or code walkthru) twice. :/
That is exactly my first annoyance. I was thinking, surely this isn't how everyone else is using it... how do they put up with it?!
One wonders. I did read an article today that suggested that, in general, it is better to ask an AI platform the same question ("prompt") at least twice if not three times.
I coded my Quantasia AI door to use a
tokenized cache so it always remembers
the conversation. But that doesn't m
Have you looked at all into connecting
to or interfacing with a self-hosted
LLM? And, have you read whether any of
them are better than others?
I kinda have an itch to install a self
hosted model for coding.
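For what it's worth, talking to a self-hosted model usually comes down to one HTTP call: Ollama, llama.cpp's server, and LM Studio all expose an OpenAI-compatible `/v1/chat/completions` endpoint locally. The sketch below assumes Ollama's default port (11434) and a placeholder model name; both are assumptions you'd swap for your own setup.

```python
import json
import urllib.request

# Assumed endpoint: Ollama's default local port. llama.cpp and
# LM Studio expose the same OpenAI-compatible route on other ports.
BASE_URL = "http://localhost:11434/v1/chat/completions"

def build_request(messages, model="codellama"):
    """Build the JSON payload for an OpenAI-style chat completion call.
    'codellama' is a placeholder; use whatever model you've pulled."""
    return {"model": model, "messages": messages, "stream": False}

def ask(messages, model="codellama"):
    """POST the conversation to the local server and return the reply text."""
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(build_request(messages, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible servers put the reply under choices[0].message.
    return body["choices"][0]["message"]["content"]
```

As for which models are better: that changes month to month, so trying two or three small coding models locally and comparing on your own tasks is probably the only reliable test.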
| Sysop: | smooth0401 |
|---|---|
| Location: | New Providence, NJ |
| Users: | 3 |
| Nodes: | 4 (0 / 4) |
| Uptime: | 67:19:17 |
| Calls: | 338 |
| Files: | 691 |
| D/L today: | 37 files (6,303K bytes) |
| Messages: | 58,802 |