One in six US workers pretends to use AI to please the bosses (theregister.com)
threecheese 22 hours ago [-]
Didn’t read the article, and I’m an engineer who is using too many genai tools personally and professionally, but the title spoke to me: I am 100% pretending that my professional artifacts came from genai tools.

Part of it is that “what good looks like” to leadership looks a certain way right now, thanks to LLMs. The other part is that in a large org, knowing isn’t enough; you have to show.

We are embracing this SO HARD. I need my communication to look like this, and most importantly my team’s communication needs to implicitly show that we are bought in (to accompany the explicit proof, measured in token KPIs; not kidding).

juliangmp 15 hours ago [-]
> I am 100% pretending that my professional artifacts came from genai tools.

Jesus Christ, has it gotten that bad? I'm out of the loop since I use absolutely zero generative AI tools personally.

That's a line I'd never even consider crossing. Don't pass your work off as someone else's, or rather something else's. Take pride in what you do.

rchaud 9 hours ago [-]
> Dont just sell your work off as someone, or rather something, else's.

Bosses pushing for AI don't care about that. They just need to be able to tell their bosses that "AI is improving productivity". It's a KPI that's landed on their desk from senior executives who live in a world of PowerBI dashboards, CEO "fireside chats" and McKinsey hype mongering.

mcv 14 hours ago [-]
And not just because you're letting the AI steal your credit, but also because it misinforms management about the value of AI. If they think the best work comes from AI, they might want more AI, but if it actually comes from human craftsmanship, they should be investing in that instead.
naikrovek 10 hours ago [-]
> Take pride in what you do.

Hahah, that's great. This is 2025, man. Corporations abandoned the idea of letting people take pride in what they do long ago. Corporations have no room for slowpokes to take their time to do it correctly. There is only room for ramrod fuckheads who pump out code as fast as possible and then file 400 bugs on that same code after it's in production. And, yes, they knew of the problems when they pushed it to production. Everyone must move faster. Always. Faster tomorrow than today, without exception. The idea that people have the time to even imagine a day when they could CRAFT anything is long gone.

Profits above all. Always.

Speed, speed, speed. Always.

You must always go faster. If you are not going faster then you are a liability.

hakfoo 19 hours ago [-]
I'm trying to figure out ways to placate the boss by fitting AI into the edges of my workflow.

"Look at these commits and do a review" to find stupid stuff like forgotten exception throws or nulls. Or "Critique this documentation for a different audience"

I don't want to get into the "let the machine write the code" phase, because that's the task I enjoy most, and it swaps my effort into review, which I'm frankly less confident about (having to follow and second-guess an architecture without being "there" as it evolved increases the chances I'll miss stuff).

The stuff that AI demos well at, like "refactor 5,000 lines of code" or "build a new client from the ground up," is simply not what my team works on; we end up doing things where building a prompt to actually make the change we want, and only the change we want, takes longer than writing the code itself. 80% of the time is the debugging and planning.
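That "look at these commits and do a review" pass is easy to wire up as a script. Here's a minimal sketch, assuming the official OpenAI Python client (pip install openai) and an OPENAI_API_KEY in the environment; the model name and prompt wording are illustrative, not anything the comment prescribes:

```python
# Hypothetical sketch of an edge-of-workflow commit review pass.
import subprocess
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_commit(sha: str = "HEAD") -> str:
    # Pull the full patch for one commit out of the local repo.
    patch = subprocess.run(
        ["git", "show", sha],
        capture_output=True, text=True, check=True,
    ).stdout

    # Ask only for the "stupid stuff": swallowed exceptions, null risks.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whatever your org allows
        messages=[
            {"role": "system",
             "content": ("You are a code reviewer. Flag only concrete "
                         "problems: forgotten or swallowed exceptions, "
                         "possible null dereferences, dead code.")},
            {"role": "user", "content": patch},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_commit())
```

Keeping it to a post-hoc review pass preserves exactly the division of labor described above: the human stays the author, the machine stays a second pair of eyes.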

dearilos 9 hours ago [-]
How has that worked out so far?

I'm building something similar and I found that code review with LLMs is really good when:

- You give it specific rules. I built a directory for these because they made the reviewer so much better [1]

- The rules you write are things your team already looks for during review (proper exception handling, ensuring documentation, proper comments, etc.)

[1] https://wispbit.com/rules
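To make the "specific rules" point concrete: a generic sketch in the same vein as the snippet above, assuming each rule lives in its own plain-text file under a review-rules/ directory. The layout and file names are invented for illustration; wispbit's actual rule format may differ.

```python
# Hypothetical sketch of a rules-directory-driven reviewer.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def load_rules(rules_dir: str = "review-rules") -> str:
    # Concatenate every rule file into one bulleted instruction block.
    return "\n".join(
        f"- {path.read_text().strip()}"
        for path in sorted(Path(rules_dir).glob("*.txt"))
    )

def review_diff(diff: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system",
             "content": ("Review this diff against ONLY the team rules "
                         "below, citing the rule behind each finding:\n"
                         + load_rules())},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content
```

Constraining the model to rules the team already enforces is what keeps the output reviewable; an unconstrained "find problems" prompt tends to bury real findings in nitpicks.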

gt0 21 hours ago [-]
Not in the US, but I exaggerate my use of AI to the CEO because he has fully taken on board the idea that AI can do everything, except presumably his own job.

I do use AI, but to appease him I make it out to be a bigger part of my workflow than it is.

dotcoma 23 hours ago [-]
It's a Dilbert world!
mcv 14 hours ago [-]
We recently had a poll at work about what we were using AI for, and there was only a single vote for "I'm not using any AI for my work", and that was mine.

Maybe I should share this article in that chat; then I might seem less alone.

JohnFen 10 hours ago [-]
My employer is not forcing us to use these tools, but if they were, I'd totally just pretend.
gexla 18 hours ago [-]
Some use AI detection to be sure that you're not cheating. Others use AI detection to make sure you're doing your job. "This is terrible, bring it back to me when it's slop!"
more_corn 22 hours ago [-]
I just used AI to do three things that were a bit outside my skill and comfort zone. I'm pretty sure there are lots of people using it to good effect. Actually, two things that were totally outside my comfort zone and one that was well within it, but that one would have taken me eight hours; with me directing it, the AI took about 12 minutes.

Most software engineers I know use some amount of AI assistance in coding.

"Pretends" is a pretty strong word here. A lot of people actually use it to help them do their work.

tw04 22 hours ago [-]
You've hit the nail on the head as to why I think AI is counterproductive.

In my experience, the place it’s most useful is an area you don’t have expertise in as a sort of bolster to your knowledge.

Also in my experience, it tends to be really good at producing outputs that sound extremely convincing to the non-expert but are completely incorrect in detail. And the only way to know if you got a good or bad answer is to be a subject matter expert... which sort of defeats the purpose.

sudahtigabulan 19 hours ago [-]
And, because of how first impressions work, this wrong info is what tends to stay in your memory.

Even if you "iterate", and eventually arrive at something that's correct, the thing that sticks is the first one, the one you paid the most attention to before you realized it was wrong.

ted_bunny 11 hours ago [-]
Not to mention, even if you reject as incorrect every node of the information it presents, it can still smuggle in an ontology that you might forget to examine.