Search - "performance hardware"
-
https://git.kernel.org/…/ke…/... I'm sure some of you are working on the patches already; if you are, then let's connect, because I'm an ardent researcher in this area right now.
So here it goes:
As soon as the kernel page table isolation (KPTI) bug is out of embargo, WhatsApp and FB will be flooded with overnight kernel "shikhuritee" experts who will share shitty advice non-stop.
1. The bug under embargo is a side-channel attack, which exploits the fact that Intel chips do speculative execution without proper isolation between user pages and kernel pages. Therefore, with careful scheduling, a timing attack will reveal some information from kernel pages while the code is running in user mode.
In simple terms, if you have a VPS, another person with a VPS on the same physical server may read memory being used by your VPS, which will result in unwanted data leakage. To make matters worse, malicious JS from an innocent-looking webpage might be (might be, because JS does not provide language constructs for such fine-grained control; at least none that I know of as of now) able to read kernel pages and pwn you real hard, real bad.
2. The bug comes from too much reliance on Tomasulo's algorithm for out-of-order instruction scheduling. It is not yet clear whether the bug can be fixed with a microcode update (and if not, Intel has to fix this in silicon itself). As far as I can dig, there is nothing that hints that this bug is fixable in microcode, which makes the matter much worse. Also, as I understand it, a microcode update is too shallow a mechanism to fix this kind of hardware bug.
3. A software-only remedy is possible, and it is being implemented by all major OSes (including our lovely Linux) in kernel space. The patch forces the Translation Lookaside Buffer (TLB) to flush if a context switch happens during a syscall (this is what I understand as of now). The benchmarks suggest that the slowdown will be somewhere between 5% (best case) and 30% (worst case).
4. Regarding point 3: which syscalls you use doesn't matter much; the only thing that matters is how many times syscalls are called. For example, if you are using read() or write() on 8 MB buffers, you won't see too much slowdown; but if you are calling the same syscalls once per byte, a heavy performance penalty is guaranteed (see the sketch just after this list). All processes which are I/O-heavy are going to suffer (hosting and databases are two common examples).
5. The patch can be disabled in Linux by passing an argument to the kernel at boot (the pti=off / nopti parameters, if I recall correctly); however, it is not advised, for pretty much obvious reasons.
6. For gamers: this is not going to affect games (because those are not I/O heavy)
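A quick illustration of point 4 - just a hedged sketch in C++ (POSIX calls, /dev/zero as a dummy data source; not anyone's actual benchmark), showing why syscall frequency, not data volume, is what the KPTI overhead punishes:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <vector>

int main() {
    std::vector<char> buf(8 * 1024 * 1024);   // 8 MB buffer

    int fd = open("/dev/zero", O_RDONLY);
    if (fd < 0) return 1;

    // Cheap: ~1 GB moved with only 128 read() calls, so the extra
    // page-table switch / TLB flush cost is paid just 128 times.
    for (int i = 0; i < 128; ++i)
        read(fd, buf.data(), buf.size());

    // Expensive: only 1 MB moved, but with a million read() calls,
    // so the same per-syscall overhead is paid a million times.
    char c;
    for (int i = 0; i < 1024 * 1024; ++i)
        read(fd, &c, 1);

    close(fd);
    return 0;
}
```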
Meltdown: "Meltdown" can read kernel memory from the L1D cache via speculative execution. Only Intel is affected by this variant.
Spectre: a hardware vulnerability in branch-prediction implementations that affects modern microprocessors with speculative execution, allowing malicious processes access to the contents of other programs' mapped memory. Works on all chips, including Intel/ARM/AMD.
For updates, refer to the kernel tree: https://git.kernel.org/…/ke…/...
For further details and more chit-chat, refer to: https://lwn.net/SubscriberLink/...
~Cheers~
(Originally written by Adhokshaj Mishra, edited by me.)
-
Am I the only one who hates that everything needs to be done in JavaScript nowadays?
Why can't you just start writing native software again? Why does every program need its own fucking browser engine and at least 200 MB of RAM to do nothing but show and edit text?
I want to have fast and streamlined software again and use my resources for important things. So much software that is called fast or lightweight isn't either. It's just a little less heavy and slow than the software it tries to replace.
I don't use C all the time, but maybe looking into Qt instead of Electron might be a start.
I had a project where I managed to convince my tutors to let me use C++ instead of JS, and they were surprised at how fast my application started, even though it consisted only of an empty window with a status bar. How far have we come that we even need to think about performance when opening an empty window on modern hardware?
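For anyone wondering, an "empty window with a status bar" in Qt/C++ really is just a handful of lines - this is a minimal illustrative sketch, not the actual project code:

```cpp
#include <QApplication>
#include <QMainWindow>
#include <QStatusBar>

int main(int argc, char* argv[]) {
    QApplication app(argc, argv);

    QMainWindow window;
    window.statusBar()->showMessage("Ready");  // lazily creates and shows the status bar
    window.resize(800, 600);
    window.show();

    return app.exec();  // native event loop -- no bundled browser engine, no 200 MB of RAM
}
```
-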
3 rants for the price of 1, isn't that a great deal!
1. HP, you braindead fucking morons!!!
So recently I disassembled this HP laptop of mine to unfuck it at the hardware level. Some issues with the hinge that I had to solve. So I had to disassemble not only the bottom of the laptop but also the display panel itself. Turns out that HP - being the certified enganeers they are - made the following fuckups, with probably many more that I didn't even notice yet.
- They used fucking glue to ensure that the bottom of the display frame stays connected to the panel. Cheap solution to what should've been "MAKE A FUCKING DECENT FRAME?!" but a royal pain in the ass to disassemble. Luckily I was careful and didn't damage the panel, but the chance of that happening was most certainly nonzero.
- They connected the ribbon cables for the keyboard in such a way that you have to reach all the way into the gap between the keyboard and the motherboard to connect the bloody things. And leaving some extra slack on the ribbon cables to allow servicing, with room for actually connecting the bloody things easily.. as Carlos Mantos would say it - M-m-M, nonoNO!!!
- Oh and let's not forget an old flaw that I noticed ages ago in this turd. The CPU goes straight to 70°C during boot-up but turning on the fan.. again, M-m-M, nonoNO!!! Let's just get the bloody thing to overheat, freeze completely and force the user to power cycle the machine, right? That's gonna be a great way to make them satisfied, RIGHT?! NO MOTHERFUCKERS, AND I WILL DISCONNECT THE DATA LINES OF THIS FUCKING THING TO MAKE IT SPIN ALL THE TIME, AS IT SHOULD!!! Certified fucking braindead abominations of engineers!!!
Oh and not only that, this laptop is outperformed by a Raspberry Pi 3B in performance, thermals, price and product quality.. A FUCKING SINGLE BOARD COMPUTER!!! Isn't that a great joke. Someone here mentioned earlier that HP and Acer seem to have been competing for a long time to make the shittiest products possible, and boy they fucking do. If there's anything that makes both of those shitcompanies remarkable, that'd be it.
2. If I want to conduct a pentest, I don't want to have to relearn the bloody tool!
Recently I did a Burp Suite test to see how the devRant web app logs in, but due to my Burp Suite being the community edition, I couldn't save it. Fucking amazing, thanks PortSwigger! And I couldn't recreate the results anymore due to what I think is a change in the web app. But I'll get back to that later.
So I fired up bettercap (which works at lower network layers and can conduct ARP poisoning and DNS cache poisoning) with the intent to ARP poison my phone and get the results straight from the devRant Android app. I haven't used this tool since around 2017 due to the fact that I kinda lost interest in offensive security. When I fired it up again a few days ago in my PTbox (which is a VM somewhere else on the network) and today again in my newly recovered HP laptop, I noticed that both hosts now have an updated version of bettercap, in which the options completely changed. It's now got different command-line switches and some interactive mode. Needless to say, I have no idea how to use this bloody thing anymore and don't feel like learning it all over again for a single test. Maybe this is why users often dislike changes to the UI, and why some sysadmins refrain from updating their servers? When you have users of any kind, you should at all times honor their installations, give them time to change their individual configurations - tell them that they should! - in other words give them a grace time, and allow for backwards compatibility for as long as feasible.
3. devRant web app!!
As mentioned earlier I tried to scrape the web app's login flow with Burp Suite but every time that I try to log in with its proxy enabled, it doesn't open the login form but instead just makes a GET request to /feed/top/month?login=1 without ever allowing me to actually log in. This happens in both Chromium and Firefox, in Windows and Arch Linux. Clearly this is a change to the web app, and a very undesirable one. Especially considering that the login flow for the API isn't documented anywhere as far as I know.
So, can this update to the web app be rolled back or merged back to an older version of that login flow, or can I at least know how I'm supposed to log in to this API in order to be able to start developing my own client?
-
"Thank you for choosing Microsoft!"
No Microsoft, I really didn't choose you. This crappy hardware made you inevitable, not a choice.
And like hell do I want to run your crappy shit OS. I tried to reset my PC and got all my programs removed (because that's obviously where the errors are, not the OS, right? Certified motherfuckers). Yet the shit still didn't get resolved even after a reset. So now I'm installing Windows fresh again, because "I chose this".
Give me a break, Microshaft. If it wasn't for your crappy OS, I would've gone to sleep hours ago. Yet me disabling your shitty telemetry brought this shit upon me, by blocking me from getting Insider updates just because I added a registry key and disabled a service. Just how much are you going to force data collection onto your "nothing to hide, nothing to fear" users, Microsoft?
Honestly, at this point I think that Microsoft under Ballmer might've been better. Because while Linux was apparently cancer back then, at least this shitty data collection in exchange for "a free OS" wasn't a thing yet.
My mother still runs Vista, an OS that reached EOL a few months ago. The last time she visited me, I recommended she switch to Windows 7, because it looks the same but performs better and is still supported. She refused, because it might break her configuration. Granted, that machine is probably full of malware, but at this point I'm glad she did.
Even Windows 7 has telemetry forcibly enabled at this point. Vista may be unsupported, but at least it didn't fall victim to the current status quo - data mining on every Microshaft OS that's still supported.
Microsoft may have been shady ever since they pressured manufacturers into defaulting to their OS, and GPU manufacturers have probably also been lobbied into supporting Windows exclusively. But this data mining shit? Not even the Ballmer era was as horrible as this. My mother may not realize it, but she unknowingly avoided it.
-
I hate that trend of making things more lax in terms of implementation quality while writing it off with a simple but stupid "oh, computers are faster now, users have the RAM, yadda yadda". Yeah, but back in the day things were actually running pretty damn fast in comparison, while doing it on hardware that is a total potato compared to what's used now. This trend eats away ANY gains we get in performance from upgrades. It killed the whole notion of netbooks (and I kinda liked them for casual stuff), since now every goddamn one-page blog costs you anywhere from several megabytes up to tens of megabytes of JS alone, plus lots of unnecessary computation. Like dude, you've brought in all of Angular to render some text and three buttons, and now your crappy blog is chewing through 500 MB of my RAM for whatever reason.
Also, Electron apps. Hate them. Whoever invented the concept deserves their own warm spot in Hell. You're doing the same thing you could've done more efficiently in Qt or whatever else is out there. Qt actually takes care of a lot of stuff for you, so it's not like you'll be slowed down by choosing it over Electron. Like yeah, your web version will share some code with your desktop solution, but you're the whole reason I consider a competitor's lack of Electron a huge advantage over you, even if they lack features.
The same can be said about pretty much everything that tries to be more than it should, really. IDEs, for example, are cancerous. You can do 90%+ of what you'd do in an IDE using plain Vim with *zero* plugins, and it also puts less strain on your hands.
People have just unlearned the concept of conscious consumption, it seems.
-
I've recently received another invitation to Google's Foobar challenges.
A while ago someone here on devRant (who I believe works at Google, and whose support I deeply appreciate) sent me a couple of links to it too. Unfortunately, back then I didn't take the time to learn the programming languages (Python or Java) that Google requires for these challenges. This time I'm putting everything on Python, as it's the easiest language to learn when coming from Bash.
But at the end of the day.. I am a sysadmin, not a developer. I don't know a single thing about either of these languages. Yet I can't take these challenges as the sysadmin I am. Instead, I have to learn a new language which, chances are, I'll never need again outside of some HR dickhead's interview with lateral-thinking questions and whiteboard programming, probably prohibited from using Google search like every sane programmer and/or sysadmin would for the practical challenges that actually occur in real life.
I don't want to do that. Google is a once-in-a-lifetime opportunity, I get that. Many people would probably even steal that foobar link from me if they could. But I don't think that, for me, it's the right thing to do. Google has made a serious difference by actually challenging developers with practical scenarios, and that's vastly superior to whatever an HR person at any other company could cobble together for an interview. But there's one thing they don't seem to realize. A company like Google consists of more than just developers. Not only that, it probably consists - even within their developer circles - of more than just Python and Java developers. If any company should know about more optimized languages such as C, it would be Google, which has to leverage that performance in order to be able to deliver its services.
I'll be frank here. Foobar has its own issues that I don't like. But if Google were a nice company, I'd go for it all the way nonetheless - after all, they are arguably the single biggest tech company in the world, and the tech industry itself is one of the biggest ones in the world nowadays. It's safe to say that there's likely no opportunity like working at Google. But I don't think it's the right thing. Even if I did know Python or Java... Even if I did. I don't like Google's business decisions.
I've recently flashed my OnePlus 6T with LineageOS. It's now completely Google-free, except for a stock Yalp account (that I'm too afraid to replace with my actual Google account, because oh dear, third-party app stores, oh dear, that could damage our business and has to be made highly illegal!1!). My contacts on that phone are all gone. They're all stored on a Google server somewhere (except for some, like @linuxxx', that I consciously stored on device storage and thus lost a while back), waiting for me to log back in and sync them back. I never asked for this. If Google had explicitly told me that they'd sync all my contacts to my Google account and offered feasible alternatives, I'd probably have given more priority to building a CalDAV and CardDAV server of my own. Because I do have the skills and desire to maintain that myself. I don't want Google to do this for me.
Move fast and break things. I've even got a special Termux script on my home screen, aptly named Unfuck-Google-Play. Every other day I have to use it. Google Search. When I open it on my Nexus 6P - which was Google's foray into hardware, and in which they failed quite spectacularly; I've almost bent and killed it tonight after cursing at that piece of shit every goddamn day - the Google app opens, I type some text into it.. and then it just jumps back to the beginning of whatever I was typing. A preloader of sorts. The app is a fucking web page parser, or heck, probably even just an API parser. How does that in any way justify such shitty preloaders? How does that in any way justify such crappy performance on anything but the most recent flagships? I could go on about this all day... I used to run modern Linux on a 15-year-old laptop, smoothly. So don't tell me, Google, that a - probably trillion-dollar - company can't do that shit right. When there are (commercialized) community projects like DuckDuckGo that do things a million times better than you do - yet can't compete with you because your shit is preloaded on every phone and tablet and impossible to remove without rooting - don't tell me you can't do that and a lot more. You've got fucking Google Assistant, for fuck's sake! Yet you can't make a decent search app - the goddamn thing that your company started with in the first place!?
I'm sorry. I'd love to work at Google and taste the diversity that this company has to offer. But there's *a lot* wrong with it at the business end too. That is something that - in that state - I don't think I want to contribute to, despite it being pretty much a lottery ticket that I've been fortunate enough to draw twice.
Maybe I should just start my own company.
-
A colleague of mine had to debug performance problems in a foreign, proprietary application that is ancient.
To be crystal clear: the only reason that thing exists is that some old geezers fear change.
He asked me for help because it's an _ancient_ MS SQL server that is luckily running on hardware owned by us.
Finding the credentials was already a funny task.
We had to access the vault (not joking here, we have a physical vault for storing sensitive data and critical backups), grab a folder and find the necessary data cause no one ever dares to touch that thing.
The application is btw for a sort of ERP / inventory system that is used in some ancient shops not yet migrated...
Yeah. Story speaks for itself.
Anyway, after dusting ourselves off, we were able to connect.
It was a bit ... interesting. Everything's in German. The worst kind of German.
After looking at the first tables, I started giggling.
My colleague knew immediately that this was a sign of danger (insert Simpson meme here), raised his eyebrows and asked "How bad is it....".
Me, still giggling, "lemme take a further look, this is gold".
*long sigh from the colleague*
Well... It ended with me putting my hands in front of my eyes, turning around and saying: "I cannot look at it anymore, it hurts too much...."
To summarize:
- German table names
- When a table exceeded 300-plus columns, they added another table with the same name plus the suffix "_ddd"… where ddd is a zero-filled integer sequence like 001
- To join this mess, they created views... named "generator" plus a sequence number... Some had the beginning of table names appended, which doesn't make it any less confusing.
- The process list was showing queries running longer than 5 minutes.
Which isn't at all surprising when you're generating Cartesian products of N tables with left joins.
I've seen shit.... I've seen a lot of shit.
But that shit scared me.
-
I miss the good times when the web was lightweight and efficient.
I miss the times when essential website content was immediately delivered as HTML through the first HTTP request.
I miss the times when I could open a twitter URL and have the tweet text appear on screen in two seconds rather than a useless splash screen followed by some loading spinners.
I miss the times when I could open a YouTube watch page and see the title and description on screen in two seconds rather than in ten.
I miss the times when YouTube comments were readily loaded rather than only starting to load when I scroll down.
JavaScript was lightweight and used for its intended purpose, to enhance the experience by loading content at the page bottom and by allowing interaction such as posting comments without having to reload the entire page, for example.
Now pretty much all popular websites are bloated with heavy JavaScript. Your browser needs to walk through millions of bytes of JavaScript code just to show a tweet worth 200 bytes of text.
The watch page of YouTube (known as "polymer", used since 2017) loads more than eight megabytes of JavaScript last time I checked. In 2012, it was one to two hundred kilobytes of HTML and at most a few hundred kilobytes of JavaScript, mostly for the HTML5 player.
And if one little error dares to occur on a JavaScript-based page, you get a blank page of nothingness.
Sure, computers are more powerful than they used to be. But that does not mean we should deliberately make our new software and website slower and more bloated.
"Wirth's law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster."
Source: https://en.wikipedia.org/wiki/...
A presentation by Jake Archibald from 2015, but more valid than ever: https://youtube.com/watch/...
-
Kotlin
All languages have a basic objective in mind that shapes both the language and its community:
for C/C++ it was low-level hardware access and performance, for Java OOP and learning; Kotlin was mostly made to make dev life easier, and it tries to anticipate what you want to do instead of forcing its own patterns, and tries to help you instead of punishing errors.
As a dev, at least, I feel a little more cared about and less left alone (especially in the ugly world of Java for Android).
-
Best code performance increase I made?
Many, many years ago our scaling strategy was to throw hardware at performance problems. The hardware consisted of a dedicated web server and a backing SQL Server box, so each site instance had two servers (with data replication processes in place).
Two servers turned into 4, 4 into 8, 8 into around 16 (I don't remember exactly what we ended up with). With Windows Server and SQL Server licenses getting into the hundreds of thousands of dollars, the 'powers-that-be' were becoming very concerned about our IT budget. With our IT VP and the other web managers being hardware-centric, they simply shrugged and told the company that's just the way it is.
Taking it upon myself, I started looking into utilizing web services, caching data (Microsoft's Velocity at the time), and a service that returned product data - the bottleneck behind most of the performance issues. Description, price, simple stuff. Testing the scaling in our dev environment, with a single web server and a single backing SQL server, the service was able to handle 10x the traffic with much better performance.
Since the majority of IT mgmt were hardware-centric, they blew off the results, saying my tests were contrived and my solution wouldn't work in 'the real world'. Not 100% wrong - I had no idea what would happen when real traffic hit the site.
With our other hardware guys concerned the web hardware budget was tearing into everything else, they helped convince the 'powers-that-be' to give my idea a shot.
Fast forward a couple of months (lots of web code changes): early one morning we started slowly turning on the new framework (3 load-balanced web service servers, 3 web servers, one SQL server). 5 minutes... no issues, 10 minutes... no issues, an hour... everything is looking great. Then (A is a network admin)...
A: "Umm...guys...hardly any of the other web servers are being hit. The new servers are handling almost 100% of the traffic."
VP: "That can't be right. Something must be wrong with the load balancers. Rollback!"
A:"No, everything is fine. Load balancer is working and the performance spikes are coming from the old servers, not the new ones. Wow!, this is awesome!"
<Web manager 'Stacey'>
Stacey: "We probably still need to rollback. We'll need to do a full analysis to why the performance improved and apply it the current hardware setup."
A: "Page load times are now under 100 milliseconds from almost 3 seconds. Lets not rollback and see what happens."
Stacey:"I don't know, customers aren't used to such fast load times. They'll think something is wrong and go to a competitor. Rollback."
VP: "Agreed. We don't why this so fast. We'll need to replicate what is going on to the current architecture. Good try guys."
<later that day>
VP: "We've received hundreds of emails complementing us on the web site performance this morning and upset that the site suddenly slowed down again. CEO got wind of these emails and instructed us to move forward with the new framework."
After full implementation, we were able to scale back to only a few web servers and a single SQL server, saving an initial $300,000 with potential future savings of over $500,000. A budget analysis considering other factors showed that, over the next 7 years, this would save the company over a million dollars.
At the semi-annual company wide meeting, our VP made a speech.
VP: "I'd like to thank everyone for this hard fought journey to get our web site up to industry standards for the benefit of our customers and stakeholders. Most of all, I'd like to thank Stacey for all her effort in designing and implementation of the scaling solution. Great job Stacy!"
<hands her a blank white envelope, hmmm...wonder what was in it?>
A few devs who sat in front of me turned around, the network guys to the right did too, and they all looked at me with puzzled expressions, one of them mouthing "WTF?"
-
Game Streaming is an absolute waste.
I'm glad to see that quite a lot of people are rightfully skeptical or downright opposed to it. But that didn't stop the major AAA game publishers announcing their own game streaming platforms at E3 this weekend, did it?
I fail to see any unique benefit that can't be provided by traditional hardware (either console or PC):
- Portability? The Nintendo Switch proved that dedicated consoles now have enough power to run great games both at home and on the go.
- Storage? You can get sizable microSD cards for pretty cheap nowadays. So much so that the Switch went back to using flash-based cartridges!
- Library size/price? The problem is even though you're paying a low price for hundreds of games, you don't own them. If any of these companies shut down the platform, all that money you spent is wasted. Plus, this can be solved with backwards compatibility and one-time digital downloads.
- Performance on commodity hardware? This is about the only thing these streaming services have going for them. But unfortunately it only works when you have an Internet connection, so if you have crap Internet or drop off the network, you're screwed. And has it ever occurred to people that maybe playing Doom on your phone is a terrible UX and shouldn't be done, because it wasn't designed for it?
I just don't get it. Hopefully this whole fad passes soon.
-
1) Built an entire SoC around a MIPS CPU. Fixed bugs in the CPU. Created hardware, busses, firmware, wrote Linux drivers, ported Linux.
2) Still working on a C++ abstraction framework for heterogeneous computations, 4 years in. About to solve / create a prototype for GPGPU and maybe even HDL code generation. It utilizes dynamic dispatch for scalar, SSE, AVX and other targets (a rough sketch of the dispatch idea follows this list). I started this only because I did not like the performance of the procedural noise algorithms used in a game prototype I started in 2015.
3) Created a game in 5 months to drag myself out of depression. Feeling success while your job sucks is soooo goooodd...
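For item 2, the dispatch idea boils down to: detect the CPU's SIMD capabilities once, then call the widest usable kernel through a function pointer. A toy sketch with made-up kernel names - an assumed structure, not the actual framework (__builtin_cpu_supports / __builtin_cpu_init are GCC/Clang x86 builtins):

```cpp
#include <cstdio>

using noise_fn = void (*)(float*, int);

// Per-target kernels; the real framework would have separate SSE/AVX implementations.
static void noise_scalar(float* out, int n) { for (int i = 0; i < n; ++i) out[i] = 0.5f; }
static void noise_avx2(float* out, int n)   { noise_scalar(out, n); /* AVX2 intrinsics would go here */ }

// Decide once at startup which kernel this CPU can actually run.
static noise_fn pick_kernel() {
    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx2")) return noise_avx2;
    return noise_scalar;                  // SSE path omitted in this sketch
}

int main() {
    static const noise_fn noise = pick_kernel();   // dynamic dispatch, resolved once
    float samples[8];
    noise(samples, 8);
    std::printf("%f\n", samples[0]);
    return 0;
}
```
-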
I was forced to do some work on Windows this week (CAD tools that run only on Windows). I spent a few days just setting up the tools. There were quite a few things I realized I had forgotten about Windows (as compared to Linux).
1) Installation times are downright horrific. What exactly is the installer doing for 10 minutes?
2) .NET is a cluster fuck. Not even Microsofts repair tool can fix it, but rather just hangs. I ended up using another tool to nuke it and reinstall.
3) Windows binary installs are insanely huge, thus, takes forever to download.
4) The registry is a pointless database that must have been written in hell with the single intent of destroying users' will to live. The sole existence of the registry is further proof that completely incompetent engineers designed Windows.
5) Rebooting is the only way to solve many problems. This is another sure sign of a fundamentally fucked up OS design.
6) What the heck is wrong with the GUI designers? The Control Panel must be the worst design ever. There are so many levels to get to a particular setting that I'm getting dizzy. The illogical organisation doesn't make it any better.
7) Windows networking. A perversion of the TCP/IP stack that makes it virtually impossible to understand a damn thing about the current network configuration. There are at least 3 different places that affect the settings.
8) Windows command prompt. Why did they even bother to leave it in? The interpreter is as intelligent as a retarded donut. You can't do anything with it, except type "exit" and Google for another solution.
9) Updates. Why does it take hundreds of updates per month to keep that thing safe?
10) Despite all the updates flying out of Redmond like confetti, it is still necessary to install antivirus to keep the damn thing safe. That costs extra money, and costs you further by degrading the performance of your hardware.
11) Windows performance. Software runs like it's swimming in molasses. The final stab in the back for your hardware investment, effectively knocking another few hundred bucks' worth of performance off of it.
12) Closed source is evil. If something crashes consistently, you might find a forum that addresses the issue you have. Otherwise you're out of luck. On the other hand, it might be for the better. I imagine reading the code for Windows could lead to severe depression.
I'm lucky to be a Linux dev, and should probably not complain too much... But really, Windows, go get yourself hit by a truck and die. I won't miss you.
-
Worst thing you've seen another dev do? Here is another.
Early into our eCommerce venture, we experienced the normal growing pains.
Part of the learning process was realizing that in web development, you should only access data resources on an as-needed basis.
One business object would, on creation, populate DB lookups, initialize business rule engines (calling the DB), etc.
Initially this design was fine; no one noticed anything until the business started to grow and it began causing problems in other systems (classic scaling problems).
The VP wanted a review of the code and recommendations before throwing hardware at the problem (which they had already started to do).
Over a month, I made some aggressive changes by streamlining SQL, moving initialization, and refactoring like a madman.
Overall page loads were not really affected, but back-end resource usage was almost back to pre-eCommerce levels.
The main web developer at the time was not amused and fought my changes as much as she could.
A couple of months later, the CEO was telling everyone about his experience at a trade show, where another CEO had complimented him on the changes to our web site.
The site was much faster, pages loaded without any glitches, checkout actually worked the first time, etc.
The CEO wanted to thank everyone involved, etc., and so on.
About a week later the VP handed out 'Thank You' certificates for the entire web team (only 4 at the time, I was on another team). I was noticeably excluded (not that I cared about a stupid piece of paper, but they also got a pizza lunch...I was much more pissed about that). My boss went to find out what was going on.
MyBoss: "Well, turned out 'Sally' did make all the web site performance improvements."
Me: "Where have you been the past 3 months? 'Sally' is the one who fought all my improvements. All my improvements are still in the production code."
MyBoss: "I'm just the messenger. What would you like me to do? I can buy you a pizza if you want. The team already reviewed the code and they are the ones who gave her the credit."
Me: "That's crap. My comments are all over that code base. I put my initials, date, what I did, why, and what was improved. I put the actual performance improvement numbers in the code!"
MyBoss: "Yea? Weird. That is what 'Tom' said why 'Sally' was put in for a promotion. For her due diligence for documenting the improvements."
Me:"What!? No. Look...lets look at the code"
Open up the file...there it was...*her* initials...the date, what changed, performance improvement numbers, etc.
WTF!
I opened version control and saw that she had made exactly one change, the day *after* the CEO thanked everyone: she replaced my initials with hers.
She knew the other devs would only look at the current code to see who made the improvements (and not bother to look at the diffs).
MyBoss: "Wow...that's dirty. Best to move on and forget about it. Let them have their little party. Let us grown-ups keep doing the important things."
-
So, first time ranting, sorry if I mess anything up.
When I first started my current job and got introduced to the system we were coding in, something seemed a little fishy to me. I didn't like the system anyway, but at least the language is a compiled language, so it runs quite quickly, right?
In theory, yeah - if the lead dev liked the IDE that came with it. But he must REALLY fucking hate it, because rather than using it, he codes in plain text. No syntax highlighting, no auto-indent, nothing. And he's built the entire damn system around doing that. Sadly the compiler is only integrated into the IDE, so what do we do there? Copy the code from the plain-text file into the IDE to compile it there? No no, why would you? The language has a function you can use to compile some code at runtime.
And so he does. Every. Single. Fucking. Script. There's a single main script that runs, finds the correct text file, then runtime-compiles and executes it. So we've effectively turned a compiled language into a massively unoptimized interpreted one.
I even mentioned that this might be a problem, but I was completely dismissed, so at that point it wasn't my problem anymore, and I switched to a different system anyway.
A couple of weeks later I heard the same guy complaining that the scripts were running almost the whole night, so we'd probably need some better hardware or something.
Well, if only there were a really obvious solution that would improve the performance by probably about a factor of 20 or so...
-
When throwing more hardware at a problem is always the chosen answer rather than fixing the performance issues...
-
I'm convinced this is going to be wildly unpopular, but hey...
Please stop writing stuff in C! Aside from a few niche areas (performance-critical, embedded, legacy workloads, etc.) there's really no reason to, other than some fumbled excuse about "having full control over the hardware" and "not trusting these modern frameworks." I get it, it's what we all grew up with as the de-facto standard, but times have moved on, and the number of massive memory leaks & security holes that keep coming to light in *popular*, well-tested software is a great reason why you shouldn't think you're smart enough to avoid all those issues by taking full control yourself.
Especially if, like most C developers I've come across, you also shun things like unit tests as "something the QA department should worry about" 😬
-
Sometimes I just don't know what to say anymore
I'm working on my engine and I really wanna push high triangle counts. I'm doing a pretty cool technique called visibility rendering and it's great because it kind of balances out some known causes of bad performance on GPUs (namely that pixels are always rasterized in quads, which is especially bad for small triangles)
So then I came across this post https://tellusim.com/compute-raster... which shows some fantastic results, and just for the fun of it I implemented it. Not optimized or anything, just a quick and dirty toy demo to see what sort of performance I could get.
... I just don't know what to say. Using actual hardware-accelerated rasterization, which GPUs are literally designed to be good at, I render about 37 million triangles in 3.6 ms. Eh, fine but not great. Then I implement this guy's unoptimized(!) software rasterizer and I render the same scene in 0.5 ms?!
IT'S LITERALLY A COMPUTE SHADER. I rasterize the triangles manually IN SOFTWARE and write them out with 64-bit atomic image stores. HOW IS THIS FASTER THAN ACTUAL HARDWARE!???
AND BY LIKE A ORDER OF MAGNITUDE AT THAT???
Like, I even tried doing some optimizations like backface cone culling on the meshlets, but doing that makes it slower. HOW. I'm rendering 37 million triangles without ANY fancy tricks. No hi-Z depth culling, which a GPU would normally do. No backface culling, which a GPU would normally do. Not even damn clipping of triangles. I render ALL of them ALL the time. At 0.5 ms.
-
Today I say farewell to a piece of software that has shared my professional rise as a dev; today I let go of an old friend; today I uninstall Chrome. After nearly 12 years of dedication, hard work and painstaking performance issues from time to time, you went from the child star that fixed what was wrong with browsers back in 2008 to the abusive man-child that crashes my system when I open you now. So enough with your bullshit.
Today I transfer my things to Edge (Chromium) and say farewell, old friend. There are only so many BSODs you can cause just by launching a new tab without hardware acceleration before I cannot stand the sight of you anymore.
I wish you a good and stable life, but your creators obviously couldn't give a fuck anymore about being the "lightweight and fast" browser you once were.
-
This will definitely trigger many, but the truth - regardless of how you feel - is that the greatest programmers are those who understand both the hardware level and the software.. only then are you more than a dev or programmer.. you are an engineer...
I challenge the devs who disbelieve this to go out and learn to build circuits and write optimized, efficient bare-metal code: no SDK.. no API... no drivers.. remove the unneeded abstraction layers that have blinded you... build it yourself, expand your potential and understanding..
Not only will you become more valuable overall, but you will write better code, as you become more conscious of performance, space, and the physics of the physical layer.
I'm not talking about Arduino or a Raspberry Pi.
Those who insist that high-level abstraction languages and third-party APIs are a sufficient, sustainable platform for development are blind to reality.. the more people who only know those levels, the fewer people there are pushing the low-level side of the industry.., which is the foundation of everything in it.. without that low-level software, the high-level abstractions and systems cannot run.
Why did we have huge technology advancements from the 70s to the early 2000s? Because more people in our industry understood the hardware layer and wrote software at the less abstracted layers..
Yeah, it takes longer to do things at that low level of abstraction.. but good, robust products that change the world and the industry don't take a few weeks or months to build.....
Take from this what you will... I'm just trying to open the eyes of blind developers to the true nature and reality of our industry.
-
EDIT: !rant, forgot tags, sorry :(
Tfw your new laptop CPU gets the same performance as your 10y.o. desktop and is 10x more power efficient.
Ryzen 5 2500U (2018):
> 4c/8t
> 2GHz base / 3.6 turbo
> 15W TDP
Xeon W3520 (2009):
> 4c/8t
> 2.7GHz base / 2.9 turbo
> 130W TDP
Running new hardware feels great.
Today we had a management meeting that ended in a discussion about prevention versus crisis and risk management.
Or to make it a bit simpler: prevention instead of treatment.
In IT / management / government, treatment is usually the way - you let the crisis happen, despite knowing it could have been prevented, and treat the damage / crisis.
Needless to say, the discussion escalated like usual.
It's funny how managers are able to produce sentences like: "it's important to have quality assurance as prevention, but staying within budget should be the priority" (loosely translated from German, it's hard - sorry)...
You mean the budget that exploded and quadrupled in size because you dumb fuckers pay no attention to quality assurance? Or the additional cost of hardware, maintenance etc. to compensate for the fuckups regarding performance evaluation and regression testing?
"We cannot prevent everything nor anticipate everything, it's safer to deal with an estimated risk than with the unknown"...
"But we'd need to invest in ..., which reduces value"
I could give more details, but I think the point is clear... the discussion became quite heated, and the longer it went on, the more I wanted a morphine drip with a suicide option...
Why do people hate prevention so much?
Is the concept that hard to understand? You prevent things so you don't have to deal with a crisis.
You invest to prevent loss.
It's just one of those weeks where the only happiness consists of tipping the delivery guy 20%+ and getting an honest smile in return.
:(
-
I've been a MacBook user for over a decade. After the initial disappointment of the 2016 MacBook Pro release I decided to move to a PC, and against my better judgement I bought a new Dell XPS 15, after reading all the reviews praising its build quality and performance - plus it seems to have good hardware for Linux compatibility.
So much regret. I couldn't be more disappointed; it's such a piece of shit. I admit I probably got a bad egg, but dealing with Dell support is like pulling hairs from my testicle sack. If I have to pay an extra $500-$1000 on my next laptop as an "Apple tax" to get a product that has been through proper quality control and comes with awesome customer service, so be it - that's the last time I try something new.
BTW I'm not a PC hater, I just wish more companies made high-quality products.
-
I will not miss you, bitch. See screenshot. I received new hardware. I will use a laptop with good specs as a server. My dad bought it from his previous employer when he retired. It has an ultrabook-grade 11th-gen processor and he only paid 350 euro for it. His former employer was a school; they don't give a fuck about money the way a commercial company would in such a case. It was originally bought with tax money anyway.
https://llm.molodetz.nl is currently online, but not for long; I hope to have something running by the end of the weekend. Probably a 7B model. I have plans for it that require some performance, so I won't use the heavy ones.
Retoor1b is currently 0.5B or 1.5B, I forgot which. The models with a lower parameter count are a bit more naive and trainable, like a kid. They're also not very biased yet. So that will be my main new challenge: how to make a chatbot unethically human. No political correctness under this roof.
It would be nice if I could make it a bit like BratGPT. Sounds like a joke, but that model is expensive as fuck - you'll be shocked. I would like to implement some sarcasm in it, make it a bit unpredictable. But normally such a configuration escalates into very weird behavior.
My 'server' has a freaking 4K screen and I'm working on a decade-old laptop. But seriously, the keyboard of the new one sucks. Nothing beats an X270. *tik tik tik* *rakketakketak*. My previous X270 was missing four keys. The three X270s I had all felt familiar, but still different; the other two would never lose a key, I guess. I configured the new 'server' to save battery, set up for running mostly on AC.
I'm living on a limited amount of cash (and will work again when I run out). That's why I normally don't spend money on such things myself. So I'm now very happy. Fuck, this was about to be a rant about how much my AI sucks, but it ended up being about happy stuff. Oh well...
If you're still reading, you're the best!
Edit:
Image uploading broke again. Here is the link: https://devrant.molodetz.nl/llm.png
If you call yourself a developer or consider yourself part of the IT world, and at the same time complain about software updates (all those dumb Windows update rants), then do one of the following:
1- Write a multi-million-line-of-code OS that runs on a virtually unlimited number of PC configurations and hardware, and get the code right the first time, with no major bugs, no security vulnerabilities, and consistently fast and stable performance.
Or...
2- stop calling yourself a dev, or part of the IT world.
If those 2 options are unreasonable, unachievable, or unfathomable.... then stop bitching about software updates.
-
So I got my compute shader rasterizer working pretty well now, which is great. I now also have a fallback to hardware rasterization for triangles which are a bit sussy (mostly just too large), and getting that implemented without tanking performance (a gazillion threads hitting the same atomic variable at the same time) involved some tricky workgroup/subgroup hackery, but I'm happy with it.
Only problem... I have like 90%+ SM occupancy (which is great), but I also have 90%+ SM occupancy, which means the Nvidia drivers think I'm mining cryptocurrency and start bottlenecking my compute performance at random. The throttling slowly ramps up to 3x, then slowly goes down again, then slowly up again... argh.
Thanks, miners 😐
-
The one that exists (C#) seems underused compared to where it could (or even should) be used. And the place that uses it the most (enterprise) butchers and mangles its use, just as enterprise tends to do with everything.
The one that I'm designing... the fact that it doesn't exist yet, and that even as I'm zeroing in on syntax and philosophy that I'm very much starting to be proud of, I still don't have a proper idea of how to implement even the most basic parser/interpreter for it - not because it's in any way difficult or unusual, but just because... I've never done that before, so I get into weird circular thought paths that produce weird, nonsensical code...
... On top of that, I still only have a very, very fuzzy idea of how it will (sometime in the extremely distant future) actually implement the most interesting and core feature - event-based continuous (partial) re-parsing of the source code, and the idea that traversing the tokens at the leaf level of the syntax tree should result in valid machine code (or at least assembly) that is the "compiled" program.
I *know* it's possible, I just don't yet know enough to have a concrete idea of how exactly to achieve it.
But imagine - a programming language where interactive programming is basically the default way of working, and basically the same as normal programming in it, except the act of parsing is also the (in-memory) compilation at the same time, so it's running directly on the hardware instead of via an interpreter/VM/any of that overhead crap.
Also then kinda open-source by definition.
And then to "only" write an OS in that, and voilà! A Smalltalk-like environment with non-exotic, C-family syntax and actual native performance!
ahhh... <3
* a man can dream *
-
I got my new (to me) MacBook Pro last week, provided by work. I've got all my setup and config done, for the most part, and I've noticed something.
Performance is shit. Has anyone else noticed this about the 2017 and later models? The 2015 model I had before was much, much smoother. Just zooming windows, a previously butter-smooth experience, is noticeably choppy. I/O performance is garbage too. I have a small iotest script (roughly the loop sketched below) that just writes a string a couple hundred times to disk, deletes it, and repeats this activity 100,000 times. On my Linux machine at home with six-year-old hardware, this takes about three seconds. On my new System76 laptop it takes just over a second. On the 2017 MacBook Pro, it takes about forty seconds.
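(For illustration, the script boils down to roughly this loop - shown here as a C++ stand-in, not the original script:)

```cpp
#include <chrono>
#include <cstdio>
#include <filesystem>
#include <fstream>

int main() {
    const auto path  = std::filesystem::temp_directory_path() / "iotest.tmp";
    const auto start = std::chrono::steady_clock::now();

    for (int i = 0; i < 100000; ++i) {
        std::ofstream f(path);
        for (int j = 0; j < 200; ++j)      // "a couple hundred" small writes
            f << "just some test data\n";
        f.close();
        std::filesystem::remove(path);     // delete the file and start over
    }

    const std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    std::printf("elapsed: %.1f s\n", elapsed.count());
    return 0;
}
```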
The 2017 and 2018 models are a direct downgrade in performance. Why isn't anyone talking about this?
-
TL;DR: I need your advice regarding a new workhorse of a laptop, and whether ARM / MS Surface 10 / Laptop 6 fits that purpose.
So my high-end Dell XPS (9350) keeps annoying me with its screen flickering. And it's an 8-year-old ultrabook with 16 GB of RAM that I'm using extensively for development, devops, research and whatnot. 16 GB of RAM is also becoming... not enough for all of it.
So I'm passively looking for an upgrade. I like the 13" profile (ultrabook style) and battery life, so I'd like to stay away from gaming laptops.
There has been talk about ARM being the new thing. I always saw ARM as a consumer-grade CPU arch (browsing, movies, music, docs, etc.), but the internet says that the new MS Surface devices will have ARM/Qualcomm chips built in and can compete with the MacBook Pro in terms of performance (ref.: https://windowscentral.com/hardware...), and they are allegedly being released this spring.
I'm not much of a hardware person - I prefer staying at the logical level of things - so I want to ask you, people smarter than me: what do you think? Is it a feasible upgrade from an XPS 13 (i7 Skylake / 16 GB RAM / 4K touch)? I'll be running code and image builds A LOT, using JetBrains IDEs and doing similar resource-intensive tasks. I don't care at all about GPUs - I don't use them (integrated graphics has always been sufficient).
What else should I consider?
Any alternatives?
P.S. While I can't stand Windows, I actually like MS's hardware. They are good at making it.
-
I started reading this rant ( https://devrant.com/rants/2449971/... ) by @ddit because when I started reading it I could relate to it, but the further he explained, the less relatable it got.
(I started typing this as a comment and I'm now posting it as a rant because I have a very big opinion that wouldn't fit into the character limit for a comment.)
I've been thinking about the same problem myself recently, but I have a very different opinion from yours.
I'm a hard-core Linux fanboy - GUI or no GUI (my opinion might be biased to some extent). Windows is just shit! It's useless for anything. It's for n00bs. And it's only recently that it even started getting close to being usable for power users.
Windows is good at gaming only because it was the first platform to support gaming outside of video game consoles. Just like it got the lion's share of 'computer' viruses (sheesh, you have to be explicit about viruses these days) because it was the most widely used OS. I think if macOS invested enough in it, it could easily outperform Windows in terms of gaming performance. They've got both the hardware and the software under their control. It's just that they prefer to focus on 'professionals' rather than gamers.
I agree that the Linux GUI world is not that great (but I think it's slowly getting better). The non-GUI world compensates for that limitation.
I'm a terminal freak. I use the TTY (console mode, not a VTE) even when I have a GUI running (and the GUI is only for web browsing, because TUI browsers can't handle JavaScript well, and we all know what the web is made of today - no more hacking with CSS to do your bidding).
I've been thinking of getting a Mac to do all the basic things that you'd want to do on the internet.
My list:
Linux - everything (hacking, power-user style)
macOS - normal use (browsing, streaming, social media, etc.)
Windows - none actually, but I'll give in for gaming, because most games are only supported on Windows.
Phew, I needed another 750-1500 characters to finish my reply.
-
When you hear “Haskell performance”, what comes to your mind? I was never really interested in Haskell since I had Clojure, and I thought Haskell might be slow.
Haskell with GHC is actually as fast as C or even faster. Haskell runs right on your hardware, no VM or interpreter.
When a program is small, the performance is comparable to C. Sometimes it's quicker, sometimes not. But when a program is large, the Haskell implementation will tend to be faster - unless you're a robot that generates perfect C code.
It’s both very high-order AND very fast. You don’t need math to code in Haskell.
Too bad there are no kewl libraries.
-
If modern computers have more memory but websites demand more memory, doesn't that defeat the benefit?
For example, let's assume YouTube's HTML-based user interface from 2014 needed 100 MB of RAM per tab. Now, computers might have 4 times the RAM on average, but YouTube's polymer JS-based user interface (UI) might need 400 MB per tab, a proportional increase. In fact, the browser needs to walk through a heavy 10 MB pile of JavaScript before being able to show anything on modern YouTube.
It seems like the higher demands nullify the performance benefit from the increased specifications of modern hardware. Computers get stronger but demands and workloads rise too, so performance isn't improved because some website operators feel the need to show off their "fancy" JavaScript.
-
Someone said that I should optimize the performance of a website. In fact, the problem was the hardware specs, yet they still want me to optimize the UI. I have fewer dependencies, no weird event loops, and a JSON response that is twice as big as it needs to be, because the guys on the backend said they're too lazy to optimize it.
So... it's either they upgrade the specs or blame the guys who made the API response.
New Year's goal: find out if the Linux Vulkan/Mesa drivers work on my desktop, as there's no real reason not to switch now other than maybe that.
Proton, DXVK and Gallium Nine support like 80% of games now (suck it, Nvidia users, we get the good ones!) and I've seen performance improvements after switching from Windows builds on Windows to Windows builds on Proton on the same hardware.
The question is: how the hell do I carry over almost 2 TB of shit to the new partitions? Also, VirtualBox is a bitch on Linux, because if my kernel updates it's fucked, and Boxes is trash...
-
Design in Motion: Real-Time Rendering's Impact on Architecture
Architecture, a discipline that once relied heavily on blueprints, models, and lengthy render times, has undergone a revolutionary transformation in recent years. The advent of real-time rendering technology has fundamentally altered the way architects visualize, present, and interact with their designs. This paradigm shift has not only enhanced the creative process but has also empowered architects to make more informed decisions and create immersive experiences for clients and stakeholders.
Real-time rendering, a technological marvel that harnesses the power of high-performance graphics hardware and advanced software algorithms, allows architects to generate photorealistic visualizations of their designs in a matter of milliseconds. Gone are the days of waiting hours or even days for a single rendering to complete. This acceleration in rendering time has not only expedited the design process but has also encouraged architects to explore multiple design iterations rapidly.
One of the most significant impacts of real-time rendering on architecture is the ability to visualize a design in various lighting conditions and environmental settings. Architects can now instantly switch between daytime and nighttime lighting scenarios, experiment with different materials, and observe how their designs respond to different seasons or weather conditions. This level of dynamic visualization offers insights into how a building's appearance and functionality evolve throughout the day, contributing to more holistic and thoughtful design solutions.
Moreover, real-time rendering has transformed client presentations. Architectural concepts can now be communicated with unprecedented clarity and realism. Clients can virtually walk through spaces, observing intricate details, exploring different angles, and even experiencing the play of light and shadow in real-time. This immersive experience fosters a deeper understanding of the design intent, enabling clients to provide more targeted feedback and make informed decisions.
The impact of real-time rendering on collaboration within architectural teams cannot be overstated. Traditionally, architects and designers would need to wait for a rendering to complete before discussing design changes or improvements. With real-time rendering, team members can make adjustments on the fly, observing the immediate effects of their decisions. This seamless collaboration not only enhances efficiency but also encourages interdisciplinary collaboration as architects, engineers, and other stakeholders can work together in real-time to refine designs.
The integration of virtual reality (VR) and augmented reality (AR) into the architectural workflow is another transformative aspect of real-time rendering. Architects can now create VR environments that allow clients to step inside their designs and explore every nook and cranny. This not only enhances client engagement but also enables architects to identify potential design flaws or spatial issues that might not be apparent in 2D drawings. AR, on the other hand, overlays digital information onto the physical world, facilitating on-site decision-making and construction supervision.
Real-time rendering's impact extends beyond the design phase. It has proven to be a valuable tool for public engagement and community involvement in architectural projects. By creating virtual walkthroughs of proposed structures, architects can offer the public an opportunity to experience the design before construction begins. This transparency fosters a sense of ownership and allows for constructive feedback, contributing to the development of designs that resonate with the community's needs and aspirations.
The environmental implications of real-time rendering are also noteworthy. The ability to visualize designs in various environmental contexts contributes to more sustainable architecture. Architects can assess how natural light interacts with interior spaces, optimizing energy efficiency and reducing the need for artificial lighting during the day.
In conclusion, real-time rendering has ushered in a new era of architectural design, propelling the industry into a realm of dynamic visualization, immersive experiences, and enhanced collaboration. The ability to witness designs in motion, explore different lighting conditions, and interact with virtual environments has redefined how architects approach their craft. From facilitating client presentations to fostering sustainable design solutions, real-time rendering's impact on architecture is profound and multifaceted. As the technology continues to evolve, architects have an unprecedented opportunity to push the boundaries of creativity, efficiency, and sustainability in the built environment.
-
I'm a complete noob with hardware, so can someone please help me?
My GPU can support up to 4 monitors. I have 5. I figured that since you don't need a dedicated GPU for a computer to work, my PC would be able to drive 4 with my GPU and one with the on-board graphics, but apparently I am wrong.
Is it possible to configure it to work this way? Will this seriously impact performance? (It shouldn't, right? As PCs are designed to run with one monitor anyway.)
I know it is possible to connect multiple GPUs, so if the above isn't possible, could anyone give me any advice on that? Thanks!
XXXX programming language does not scale. Nope, it is your code. Review your code and improve it! Don't add more hardware to improve performance; that solution just covers up the problem. Review, review and review.
-
Disclaimer: I'm surprisingly bad with hardware.
My bonus will come in soon and it's time for a new laptop. I've had my XPS 13 since ~2016 and I absolutely loved it - the keyboard is amaaazing.. But the display is broken, and as it's a touch display it's ridiculously expensive to replace, so that's not an option..
I want to keep it around $1k and mostly care about the keyboard being nice, a robust case (not plastic), size (no bigger than 13") and storage (at least a 1 TB SSD).
Ideally it would have an integrated SD slot and at least 1 USB-A...
Any suggestions? I'm gonna put Linux on it anyway so performance just has to be medium.
I'm thinking about getting another XPS, but they've gotten a good bit more expensive, and I know there are other devices out there nowadays that check the boxes - a ThinkPad, for example.
Just looking for some opinions, advice and suggestions.
Thanks in advance
-
I'm currently evaluating the best way to have both a Linux distro for work and study and Windows for gaming on my PC.
I need as little virtualization as possible on both systems (I need to do some high-performance computing and access hardware counters for uni, plus that sweet ultra ray-tracing 144 fps for games) and as much flexibility as possible in quickly switching between both systems (so dual boot isn't ideal either).
I tried WSL2 but had some issues, and am currently trying out a Lubuntu VM on my Windows host, but maybe someone knows the secret super-cool project that magically makes this unrealistic wish work.
-
So I love what Apple is doing with the new Macs, great hardware, streamlined optimization, blistering performance.....great stuff.
But here's the thing. I think I'm gonna get a ThinkPad and just install Fedora on it. It's far cheaper, and aside from only going up to 16 GB of RAM it meets all my needs.
But between you and me, a big factor in why I'm finding it hard to go Mac is the system settings. Like, why is it like that? Why does a separate window open when I click something?.....odd.
-
Well, you share ideas with like-minded individuals, you work on what you like and try to make the most out of it, and maybe get famous along the way (a small bonus). We push current technology to its edge using more efficient software. It's basically answering one question: what is "peak performance" in the blend of hardware and software?
At least that’s how I feel about it.
:P