Search - "generative ai"
-
If you don't know how to explain your software, but you want to be featured in Forbes (or other shitty sites) as quickly as possible, copy this:
I am proud that this software uses high-tech technologies and algorithms such as blockchain, AI (artificial intelligence), ANN (Artificial Neural Network), ML (machine learning), GAN (Generative Adversarial Network), CNN (Convolutional Neural Network), RNN (Recurrent Neural Network), DNN (Deep Neural Network), TA (text analysis), Adversarial Training, Sentiment Analysis, Entity Analysis, Syntactic Analysis, Entity Sentiment Analysis, Factor Analysis, SSML (Speech Synthesis Markup Language), SMT (Statistical Machine Translation), RBMT (Rule Based Machine Translation), Knowledge Discovery System, Decision Support System, Computational Intelligence, Fuzzy Logic, GA (Genetic Algorithm), EA (Evolutionary Algorithm), and CNTK (Computational Network Toolkit).
🤣 🤣 🤣 🤣 🤣
-
We need to update the slang "script kiddie" to "prompt enginot" or something.
So my boss's boss or someone even higher up drank the generative AI kool-aid and hired a 40-something kid to generate images for the marketing teams (or something like it).
Naturally, things soon went to shit.
The bloke already left, having stayed less than six months on the job.
Guess who got to handle all the shit-is-currently-on-fire the kiddie left behind?
First impression: apparently, muggles tried to Slack him some very broad descriptions of what they needed, and at first he actually tried to summarize those bark-speech pseudo-words into an actual prompt.
It does not seem to have gone on for too long, though.
After users requested changes to the AI outputs, he would update the prompts, all right. And the process seemed to go fast enough... until things got close to completion.
Then users would request the tiniest changes to the AI output...
And the bloke couldn't do it.
Seriously. Some things were as simple as "we need this slider to go all the way up to 180% instead of 100%" on a lame dashboard and *kid. could. not. do. it.*
In many cases he literally just gave up and copied the Slack history into the AI prompt. No dice.
Bloke couldn't code a print('hello world') into a Jupyter notebook cell, that's what I'm saying.
Apparently, he was "self taught", too. And was hired to "speed up the process of generating visual aids for usage in meetings and presentations". But then "the budget for this position was considered excessive" (meaning: shit results from a raw idea some executive crapped out one day) and "the position was expanded to include the development of Business Intelligence Dashboards and Data Apps".
So now it is up to me (and my CRIMINALLY UNDERPAID team) to clean up his mess and maintain/fix/deprecate DOZENS of SHODDILY DESIGNED and MOSTLY USELESS but QUITE ACTIVE "data vis" PIECES OF SHIT.
Fuck "AI prompters", fucking snake oil script kiddies.7 -
My wife mentioned that any prompt to chatgpt was akin to leaving the faucet open for three whole minutes, in terms of water usage.
Thus she would stop using generative AI.
Love, I'm sorry but generative AI has taken over. Even DNS lookups are using AI (including generative).
It is literally impossible to use any device connected to the internet without the blessing of our generative AI overlords.
And since people hardly spend a single hour of their day (including sleeping time) without using an internet-connected device at least once, I would say we've all been in the matrix since circa 2023.
-
My reasoning is stupid, I just think it's cute in a pimp my ride kind of way. I heard you like getting colossally pounded in the fucking ass, so we put a virtual machine inside your compiler so you can use your binaries while you compile your binaries.
But there is a practical angle to it, too. It's state, structures and execution within the code itself -- that is, in a sense, generators "embedded" within the source, but without any kind of special syntax.
Rather, the code is all the same, and I'd have the option to make calls at compile time: the output of these calls could, in turn, be part of the resulting binary or processed by further calls.
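To make the idea concrete, here's a minimal sketch in Python (hypothetical, not my actual prototype; a build script stands in for the compiler, and a generated module stands in for the binary):

```python
# Hypothetical sketch: "compile-time" calls whose output is baked into the build artifact.
# A build script stands in for the compiler; the generated module ships only data.
import importlib.util

def expensive_call() -> list[int]:
    # Ordinary code, no special syntax -- it just happens to run at build time.
    return [n * n for n in range(10)]

def build(out_path: str = "generated_module.py") -> str:
    # "Compile time": execute the call now and splice its result into the output module.
    table = expensive_call()
    with open(out_path, "w") as f:
        f.write("# auto-generated at build time; do not edit\n")
        f.write(f"TABLE = {table!r}\n")
    return out_path

if __name__ == "__main__":
    path = build()
    # "Run time": the generated module holds precomputed data, not the computation.
    spec = importlib.util.spec_from_file_location("generated_module", path)
    generated = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(generated)
    print(generated.TABLE[7])  # 49, read straight from the baked-in table
```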
It'd greenlight the wildest fuckery in the jungle, because *that* is the true and ultimate abstraction: programs that write other programs with minimal human intervention. But is my (still) theoretical, cheap ass two-dollar prototype approach held together with clown jizz and prayers better than the endless cumloads worth of corporate investment that's dumped and pumped into generative AI on a daily basis?
Well... **lights cigarette**
That's what we're about to find out, mother fuckers.
-
Now that the whole generative AI debacle is finally dying down, I gotta ask the same question again:
WHY THE FUCK DO CORPORATIONS INSIST ON FALLING FOR THE HYPE CYCLE EVERY FUCKING TIME?
I mean, I know why. It's because BigTech, Inc. always convinces companies like "Bob's tech wannabe car windows or something" to pay $$$ for this year's software fashion trends using arguments like "all the cool entrepreneurs are doing it! You don't wanna end up like those communist losers, do you?"
Then BigTech sells some shit that the muggles can't really afford (much less use), then shit hits the fan, then BigTech pretends that they never heard of it (hey, Blockchain IoT self-service BI wearable augmented reality 3D NFT electric scooters from big data industry 4.0!), then the news cycle moves on. Rinse and repeat.
But, fuck, can't the muggles ever learn fucking ANYTHING? The tech industry is the fast fashion of industries. Do not try to imitate Facebook, Google, Apple, Amazon; let them run their own course towards the cliff.
Instead, do your own thing.
Silicon Valley is not a good example for furniture companies to follow. So stop building IKEA chatbots.
-
With all this generative AI stuff I feel pretty OK with having exited actual coding work some time back. (I know there is infinite work left, but anyways…)
And…
YOU FUCKING IMBECILE FRONTENDERS. MAKE THE BACK BUTTON WORK! PERSIST THE FUCKING VERTICAL SCROLL POSITION!!! MAKE WEB GREAT AGAIN!!! #MWGA
-
Warehouse devs are trying to make our own homegrown warehouse robot AI to ease up the route optimization math, without paying big $$$ for some big tech's crap.
Those robots look like wild "dire roombas", BTW. Each is large and round like a bike tire on its side.
And the state of the art on the driving AI for those robots is... actually pretty good. It can avoid moving obstacles like humans or forklifts along the route or even drive around liquid puddles (our warehouses aren't exactly pristine).
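At its most basic, that kind of route planning around obstacles looks something like this toy sketch (purely illustrative and heavily simplified, not the team's actual AI; the grid and its contents are made up, and the real thing handles moving obstacles, which this does not):

```python
# Toy sketch only -- NOT the warehouse team's actual AI. A robot on a grid plans a
# shortest route around obstacles ('#': forklift, puddle, human) using plain BFS.
from collections import deque

Cell = tuple[int, int]

def shortest_route(grid: list[str], start: Cell, goal: Cell) -> list[Cell] | None:
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])   # (current cell, path taken so far)
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # four driving directions
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route at all: better to stay put than wander into a porta-potty

warehouse = [
    "....#....",
    ".##.#.##.",
    ".........",
]
print(shortest_route(warehouse, (0, 0), (0, 8)))  # drives down and around the obstacle wall
```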
So then came the time for the warehouse devs to benchmark their AI.
They compared it to a ready-to-use solution and fared quite well. Until some suit decided they should ask chatgpt (or some other text AI crap) to try its "hand".
I've spent the best part of the day laughing my ass off; the devs had to go on a hunt for the *runaway robots driven by chatgpt*. One of them found its way to a freaking porta-potty like 50m outside the warehouse. Others were trying to lift forklifts to take those out of the way. Ooh, the irony.
A few were gladiators fighting over the same pallet to lift. They were literally trying to sabotage each other to steal the pallet.
But most were just driving around randomly like giant roaches.
Man, sometimes generative AI can really make us laugh.
-
We’re only random people living in random places, speaking random languages, eating random food, sleeping, studying and working random hours. Traveling to random points on a sphere.
Just the random range is different.
Just random stuff happens at the crossroads of two random dots, and the entropy speeds up or slows down.
Nothing special at all.
Just a finite state machine iteration.
I mean the amount of effort we put into explaining infinity is outstanding.
What if there is no infinity at all?
What if infinity is just a misunderstanding in our interpretation of the world around us? It's just pixels, resolution, Gaussian splatting, quantum state, you name it.
Hey man, the world is flat. Just put it in 2D space. How much space do you need from a simulation perspective, where your patient's eyes can only see up to a certain amount of light particles per second through a shitty lens?
Propose world optimization techniques: slow down subject perception, tiredness introduced. Compress memory, sleep introduced. Limit neurons, CPU power assigned. Deploy on cloud, put it to life. Exit 0: body failure. Exit 1: suicide. kill -9: killed by tty from IP EARTH.X.Y.
What can you do to make the world around this planet look alive? Make it blink.
We developers are lazy and I believe that nature is even lazier than us.
You think you're going to the elevator right now? You're going to the preloader. Looking out the window equals playing a video from playback. It never goes live, just a precomputed FSM. Cars, trains, airplanes? Preloaders everywhere. Highways to split traffic between cities and communications. The road and city planning department is a matrix maintenance department. And don't get me started about space.
Space is empty because it's not even finished. So they put it all behind glass called the Milky Way. You know how glass looked 500 years ago? It was milky, so it's the Milky Way, so we don't see shit.
If space were finished, I would have started writing this text from Mars, finished it and sent it from Earth. But no, it's light years, guys, and a light year is not a second for matter. A light year is a second only for the injected-thoughts exchange. Thoughts of the global computer called generative AI that they introduced on local computing devices called the cloud.
Even the preloader system is not there; they left us with one map and an overpopulated demo. What a shit hole. I bet they're increasing the temperature right now to erase this alpha build and cash out. Obviously there are so many bugs here that this one can't be fixed anymore. Too many viruses.
Hope for 0-days to start happening so we can escape using time travel or something.
I bet they cut the budget or something, moved the team to other projects. Or even worse, the solar system team got laid off because we are just neurons that were ordered to do it. And now we're stuck in some maintenance mode: no new physics, no new thoughts to pursue, just slow degeneration. I would pay more for the next run and switch to another galaxy far, far away where they at least have more modern light-speed technology.
What do you think about it, Trinity? Not even worth wasting your time on that. No white rabbit this time.
I do not recommend this game at this stage of early access.
- only one available map; despite promises of expansions, over the years not a single DLC arrived
- missing space adventures
- no galaxy travel mode, only teaser trailers of what you can do in other "universes"
- developers don't respond to complaints
- despite the diversity of species and buildings at first sight, the world looks too generic
- instead of new features, bots with mind manipulation, A/B testing and data harvesting were introduced
- death anti-cheat mode installed
-
Can't seem to understand graphic designers and/or people who constantly cry about generative AI being "not art".
Why are they so angry?
-
The Turing Test, a concept introduced by Alan Turing in 1950, has been a foundational concept for evaluating a machine's ability to exhibit human-like intelligence. But as we edge closer to the singularity—the point where artificial intelligence surpasses human intelligence—a new, perhaps unsettling question comes to the fore: Are we humans ready for the Turing Test's inverse? Unlike Turing's original proposition, where machines strive to become indistinguishable from humans, the Inverse Turing Test ponders whether the complex, multi-dimensional realities generated by AI can be rendered palatable or even comprehensible to human cognition. This discourse goes beyond mere philosophical debate; it directly impacts the future trajectory of human-machine symbiosis.
Artificial intelligence has been advancing at an exponential pace, far outstripping Moore's Law. From Generative Adversarial Networks (GANs) that create life-like images to quantum computing that solves problems unfathomable to classical computers, the AI universe is a sprawling expanse of complexity. What's more compelling is that these machine-constructed worlds aren't confined to academic circles. They permeate every facet of our lives—be it medicine, finance, or even social dynamics. And so, an existential conundrum arises: Will there come a point where these AI-created outputs become so labyrinthine that they are beyond the cognitive reach of the average human?
The Human-AI Cognitive Disconnection
As we look closer into the interplay between humans and AI-created realities, the phenomenon of cognitive disconnection becomes increasingly salient, perhaps even a bit uncomfortable. This disconnection is not confined to esoteric, high-level computational processes; it's pervasive in our everyday life. Take, for instance, the experience of driving a car. Most people can operate a vehicle without understanding the intricacies of its internal combustion engine, transmission mechanics, or even its embedded software. Similarly, when boarding an airplane, passengers trust that they'll arrive at their destination safely, yet most have little to no understanding of aerodynamics, jet propulsion, or air traffic control systems. In both scenarios, individuals navigate a reality facilitated by complex systems they don't fully understand. Simply put, we just enjoy the ride.
However, this is emblematic of a larger issue—the uncritical trust we place in machines and algorithms, often without understanding the implications or mechanics. Imagine if, in the future, these systems become exponentially more complex, driven by AI algorithms that even experts struggle to comprehend. Where does that leave the average individual? In such a future, not only are we passengers in cars or planes, but we also become passengers in a reality steered by artificial intelligence—a reality we may neither fully grasp nor control. This raises serious questions about agency, autonomy, and oversight, especially as AI technologies continue to weave themselves into the fabric of our existence.
The Illusion of Reality
To adequately explore the intricate issue of human-AI cognitive disconnection, let's journey through the corridors of metaphysics and epistemology, where the concept of reality itself is under scrutiny. Humans have always been limited by their biological faculties—our senses can only perceive a sliver of the electromagnetic spectrum, our ears can hear only a fraction of the vibrations in the air, and our cognitive powers are constrained by the limitations of our neural architecture. In this context, what we term "reality" is in essence a constructed narrative, meticulously assembled by our senses and brain as a way to make sense of the world around us. Philosophers have argued that our perception of reality is akin to a "user interface," evolved to guide us through the complexities of the world, rather than to reveal its ultimate nature. But now, we find ourselves in a new (contrived) techno-reality.
Artificial intelligence brings forth the potential for a new layer of reality, one that is stitched together not by biological neurons but by algorithms and silicon chips. As AI starts to create complex simulations, predictive models, or even whole virtual worlds, one has to ask: Are these AI-constructed realities an extension of the "grand illusion" that we're already living in? Or do they represent a departure, an entirely new plane of existence that demands its own set of sensory and cognitive tools for comprehension? The metaphorical veil between humans and the universe has historically been made of biological fabric, so to speak.