If working with AI has taught me anything, it's to ask it absolutely NOTHING involving numbers. It’s fucking horrendous. Math, phone numbers, don’t ask it any of that. It’s just advanced autocomplete and it does not understand anything. Just use a search engine, ffs.
I asked my work’s AI to just give me back a comma-separated list of strings I’d given it, and it returned a list where every string was “CREDIT_DEBIT_CARD_NUMBER”. The numbers were 12 digits, not 16. I asked three times for the raw numbers and had to say, verbatim, “these are 12 digits long, not 16. Stop obfuscating it” before it gave me the right thing.
I’ve even had it be wrong about simple math. It’s just awful.
Exactly. But they tout this as “AI” instead of an LLM. I need to improve my kinda-OK regex skills. They’re already better than almost anyone else’s on my team, but I can improve them.
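For what it’s worth, the comma-separated-list task upthread is a one-liner with a regex and no LLM involved. A minimal sketch (the sample text and the 12-digit pattern are just assumptions based on the numbers described above):

```python
import re

text = "order 123456789012 shipped, refund 987654321098 pending"

# Find every standalone 12-digit number and join with commas —
# deterministic, so nothing gets swapped for a redaction placeholder.
numbers = re.findall(r"\b\d{12}\b", text)
print(", ".join(numbers))  # → 123456789012, 987654321098
```

The `\b` word boundaries keep it from matching 12 digits inside a longer number.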
It’s really crappy at addressing its own mistakes. I find it gets into an infinite error loop where it hops between two to four answers, none of which are correct. Sometimes it helps to explicitly instruct it to format the data as provided and not edit it in any way, but I still get paranoid.
Either you’re bad at ChatGPT or I’m a machine whisperer, but I have a hard time believing Copilot couldn’t handle that. I regularly have it rewrite SQL code.
What models have you tried? I used local Llama 3.1 to help me with university math.
It seemed capable of solving differential equations and doing Laplace transforms. It made some mistakes during the calculations, like a math professor in a hurry.
What worked best was getting a solution from Llama and validating each step with WolframAlpha.
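The same validate-don’t-trust idea works locally too. A small sketch (the example equation and claimed solution are made up, not from the thread): suppose the model claims y(t) = e^(-2t) solves y' + 2y = 0; you can check the claim numerically instead of trusting the derivation.

```python
import math

def residual(t, h=1e-6):
    """Plug the claimed solution into the ODE and measure the leftover."""
    y = lambda u: math.exp(-2 * u)
    dy = (y(t + h) - y(t - h)) / (2 * h)  # central-difference derivative
    return dy + 2 * y(t)                  # should be ~0 if the claim holds

print(max(abs(residual(t / 10)) for t in range(1, 20)))  # ~0, so it checks out
```

If a claimed solution is wrong, the residual comes out visibly nonzero instead of rounding-error small.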
Or, and hear me out on this, you could actually learn and understand it yourself! You know? The thing you go to university for?
What would you say if it came to light that an engineer had outsourced the structural analysis of a bridge to some half-baked autocomplete? I’d lose all trust in that bridge and all respect for that engineer, and I’d hope they were stripped of their title and held personally responsible.
These things are currently worse than useless, precisely because they’re sometimes right. That gives people the false impression that you can actually rely on them.
It was the last remaining exam before I’d have been dropped from the university. I wish I could have attended the lectures, but, due to work, it was impossible. Also, my degree isn’t fully related to my field of work: I work as a software developer, and my degree is in electronics engineering. I just need a degree to get promoted.
Copilot and ChatGPT suuuuck at basic maths. I was doing coupon discount shit and it failed every one of them. It sometimes presented the right formula but still fucked up really simple stuff.
I asked Copilot to reference an old sheet, take column A, compute its percentage completion into column B, and add ten percent to it in the new sheet. I ended up with everything showing 6000% completion.
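That column task is exactly the kind of thing a plain formula does reliably. A hypothetical sketch, assuming column A holds tasks done, a second column holds the totals, and “add ten percent” means ten percentage points (one reading of the request):

```python
# Hypothetical rows: (tasks done, tasks total) per item in the old sheet.
rows = [(3, 10), (9, 12)]

# Percentage completion plus ten percentage points — a plain formula,
# so every row is checkable by hand and nothing lands at 6000%.
completion = [done / total * 100 + 10 for done, total in rows]
print(completion)  # → [40.0, 85.0]
```

If “add ten percent” instead means scaling by 1.1, swap `+ 10` for `* 1.1`; either way the spreadsheet formula is deterministic where the LLM wasn’t.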
Yeah because it’s a text generator. You’re using the wrong tool for the job.
Copilot is integrated into Excel, and it’s woeful.