Hi, I am a Javascript apprentice. Can you help me with my project?
- Sure! What do you need?
Oh, it’s very simple, I just want to make a static webpage that shows a clock with the real time.
- Wait, why static? Why not dynamic?
I don’t know, I guess it’ll be easier.
- Well, maybe, but that’s boring, and if that’s boring you are not going to put in time, and if you’re not going to put in time, it’s going to be harder; so it’s better to start with something harder in order to make it easier.
You know that doesn’t make sense right?
- When you learn Javascript you’ll get it.
Okay, so I want to parse this date first to make the clock be universal for all the regions.
- You’re not going to do that by yourself right? You know what they say, don’t repeat yourself!
But it’s just two lines.
- Don’t reinvent the wheel!
Literally, Javascript has a built in library for t...
- One component per file!
I’m lost.
- It happens, and you’ll get lost managing your files as well. You should use Webpack or Browserify for managing your modules.
Doesn’t Javascript include that already?
- Yes, but some people still have previous versions of ECMAScript, so it wouldn’t be compatible.
What’s ECMAScript?
- Javascript
Why is it called ECMAScript then?
- It’s called both ways. Anyways, after you install Webpack to manage your modules, you still need a module and dependency manager, such as bower, or node package manager or yarn.
What does that have to do with my page?
- So you can install AngularJS.
What’s AngularJS?
- A Javascript framework that allows you to do complex stuff easily, such as two way data binding!
Oh, that’s great, so if I modify one sentence on a part of the page, it will automatically refresh the other part of the page which is related to the first one, and vice versa?
- Exactly! Except two way data binding is not recommended, since you don’t want child components to edit the parent components of your app.
Then why make two way data binding in the first place?
- It’s backed by Google. You just don’t get it, do you?
I have installed AngularJS now, but it seems I have to redefine something called a... directive?
- AngularJS is old now, you should start using Angular, aka Angular 2.
But it’s the same name... wtf! Only 3 minutes have passed since we started talking, how are they in Angular 2 already?
- You mean 3.
2.
- 3.
4?
- 5.
6?
- Exactly.
Okay, I now know Angular 6.0 and use a component-based architecture with only one-way data binding. I have read and started using the design patterns already described to solve my problem without reinventing the wheel, using libraries such as lodash, and D3 for a world map visualization of my clock, as well as moment to parse the dates correctly. I also used ECMAScript 6 with Babel to ensure backwards compatibility.
- That’s good.
Really?
- Yes, except you didn’t concatenate your html into templates that live under a super Javascript file, which can then be concatenated with all your Javascript files and finally minified in order to reduce latency. And automate all that process using Gulp, while testing every single unit of your code using Jasmine, or Protractor, or just the Angular built-in unit tester.
I did.
- But did you use TypeScript?
-
Probably the biggest one in my life.
TL;DR at the bottom
A client wanted to create an online retirement calculator. Sounds easy enough, I said sure.
A few days later I get an email with an Excel file saying the online version has to work exactly like this, and that they're on a tight deadline.
Having a little experience with Excel, I thought, eh, what could possibly go wrong; if anything I can take the calculations off the Excel file.
I WAS WRONG !!!
17 sheets, all linking to each other, passing data around to make the calculations
( Sure, they had a lot of stuff to calculate, like age, gender, financial group etc etc )
First thing I said to myself was: WHAT THE FREAKING FUCK IS THIS? WHAT YEAR IS THIS?
After messing with it for a couple of hours just to get one calculation out of it, I gave up.
Thought about making a MySQL database with the cell data and doing the calculations myself, but NOOOO.
Whoever made it decided to put an Excel formula in every cell ( so even if I managed to get it into a database and recode all the calculations, it would be wayyy past the deadline )
Then I had an epiphany:
"What if I could just parse the Excel file and get the data?"
Did a bit of research, and sure enough there's a PHP project for exactly that.
( But I think it was outdated; it takes about 15-25 seconds to parse, and it makes a copy of the original file )
But this seemed like the best option at the time.
So I downloaded the library, finished the whole thing, wrote a cron job to delete the temporary files, and added a loading spinner for that delay, so people know something is happening
( and still had a few days to spare )
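For the curious, the idea translates to a few lines in most languages. A rough sketch in Python, purely for illustration (the real thing was PHP; pycel is an assumption here, with API names from its README as I remember them, and all the cell addresses are hypothetical):

from pycel import ExcelCompiler

# Parse the workbook once; pycel compiles the formulas from all sheets.
excel = ExcelCompiler(filename="retirement_calculator.xlsx")

# Push the user's form inputs into the (hypothetical) input cells...
excel.set_value("Inputs!B2", 42)      # age
excel.set_value("Inputs!B3", "m")     # gender

# ...and read the (hypothetical) result cell, formulas and all.
print(excel.evaluate("Summary!D10"))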
Sent the demo link to the client; they were very happy with it, cause it worked the same as their cute little Excel file and gave the same results.
It's been live on their website for almost a year now; lots of submissions, no complaints.
I was feeling a bit guilty just after finishing it, cause I could've done better, but not anymore.
Sorry for making it so long; to understand the whole thing, you need to know the full story.
TL;DR - Replicated the functionality of a 17-sheet Excel calculator in PHP, hack-ishly.
-
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to do the equivalent of "literal" LIKE column with a million rows to compare. It would take around a second on average per literal lookup, for a service that needs to handle high load at low latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that would do the lookup in 1ms on average, with the worst possible case being very rare and not too distant from that.
In another case there was a lookup over arbitrary time spans that most people would not bother to cache, because the input parameters are too short-lived and variable to make a difference. I replaced the 50000+ line application acting as a middleman between the application and the database with 500 lines of code that did the lookup faster, and I was able to implement a reasonable caching strategy. This dropped resource consumption by a factor of ten at minimum. Misses were cheaper, and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects for the high-level language, which was causing it to consume excessive amounts of memory when processing huge data streams.
Another system would constantly download a huge data set to every point of sale, then parse and apply it. It had to reflect changes quickly, but it would download the whole dataset each time, containing hundreds of thousands of rows. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C (much faster than the traditional interpreted language), then use a custom data-differential format, a TCP data streaming protocol, binary serialisation and LZMA compression to pipe it down to the points of sale. This protocol also used versioning for catch-up and differential combination for an additional reduction in size. It went from being 30 seconds to a few minutes behind to being able to keep within a second of changes. It had also been using so much bandwidth that it would reach the limit on ADSL connections and then get throttled. I looked at the traffic stats afterwards, and it dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. From the drop in the graphs you'd think all the machines had been turned off, because that's what it looked like. It could now happily run over GPRS or 56K.
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries: someone wrote terrible SQL, then to optimise it ran it in the background with all possible variable values and stored the results of the joins and aggregates in new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30 GB (and rapidly climbing) to a couple of GB.
Another time a piece of mathematics had to generate all possible permutations, and the existing solution was factorial. I worked out how to optimise it to run in O(n^2), which, believe it or not, made a world of difference. It went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
I built my own frontend systems (admittedly rushed) that do what Angular/React/Vue aim for, but with higher (maximum) performance, including an in-memory database backing the UI that had layered, event-driven indexes and could handle referential integrity (an overlay on the database only revealing items with valid integrity), or reorder and reposition events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
So many times I've optimised things on autopilot, just cleaning up code normally. Hundreds, thousands of optimisations. It's what makes my clock tick.
-
A dev team has been spending the past couple of weeks working on a 'generic rule engine' to validate a marketing process. The “Buy 5, get 10% off” kind of promotions.
The UI has all the great bits, drop-downs, various data lookups, etc etc..
What the dev is storing in the database is the actual string representation, FieldA = “Buy 5, get 10% off”, as “built” from the UI.
Might be OK, but now they want to apply that string to an actual order: extract ‘5’ and the word ‘Buy’ to apply the purchase-quantity rule, and ‘10%’ and the word ‘off’ to subtract from the total.
Dev asked me:
Dev: “How can I use reflection to parse the string and determine what are integers, decimals, and percents?”
Me: “That sounds complicated. Why would you do that?”
Dev: “It’s only a string. Parsing it was easy. First we need to know how to extract numbers and be able to compare them.”
Me: “I’ve seen the data structures, wouldn’t it be easier to serialize the objects to JSON and store the string in the database? When you deserialize, you won’t have to parse or do any kind of reflection. You should try to keep the rule behavior as simple as possible. Developing your own tokenizer that relies on reflection and hoping the UI doesn’t change isn’t going to be reliable.”
Dev: “Tokens!...yea…tokens…that’s what we want. I’ll come up with a tokenizing algorithm that can utilize recursion and reflection to extract all the comparable data structures.”
Me: “Wow…uh…no, don’t do that. The UI already has to map the data, just make it easy on yourself and serialize that object. It’s like one line of code to serialize and deserialize.”
Dev: “I don’t know…sounds like magic. Using tokens seems like the more straightforward O-O approach. Thanks anyway.”
I'm probably getting too old to keep up with these kids; I have no idea what the frack he was talking about. Not sure if they are too smart or I'm too stupid/lazy. Either way, I'm keeping my name as far away from that project as possible.
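For what it's worth, the serialize-don't-tokenize approach is about this much code. A minimal sketch, assuming a hypothetical Promotion shape (Python here for illustration; their stack would presumably use a .NET serializer the same way):

import json
from dataclasses import dataclass, asdict

@dataclass
class Promotion:
    verb: str        # "Buy"
    quantity: int    # 5
    discount: float  # 0.10 for "10% off"

rule = Promotion(verb="Buy", quantity=5, discount=0.10)

stored = json.dumps(asdict(rule))         # one line to store
loaded = Promotion(**json.loads(stored))  # one line to load; no reflection,
                                          # no tokenizer, no string parsing

-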
Recently for a project I needed to read/write ID3 tags from MP3 files. And after a long search, I found this bloated, monolithic but quite stable library, "getID3".
So, I was looking through the code-base and I found this. This guy is literally storing key-value data embedded as comments within the class file. Then he wrote a method to parse the data, and even used caching to ensure maximum speed! And this usage is repeated all over the code-base.
So, this is what people used to do before arrays were invented :3
-
Want to make someone's life a misery? Here's how.
Don't base your tech stack on any prior knowledge or what's relevant to the problem.
Instead design it around all the latest trends and badges you want to put on your resume because they're frequent key words on job postings.
Once your data goes in, you'll never get it out again. At best you'll be teased with little crumbs of data but never the whole.
I know, here's a genius idea: instead of putting data into a normal database and then using a cache, let's put it all into the cache. And by the way, it's a volatile cache.
Here's an idea: for something as simple as a single log, let's make it use a queue that goes into a queue that goes into another queue that goes into another queue, all of which are black boxes. No rhyme or reason; queues are all the rage.
Have you tried: let's use a newfangled tangle, trust me, it's safe, INSERT BIG NAME HERE uses it.
Finally it all gets flushed down into this subterranean cunt of a sewerage system and good luck getting it all out again. It's like hell except it's all shitty instead of all fiery.
All I want is to export one table, a simple log table of a few GB, to CSV or heck, whatever generic format it supports; that's it.
So I run the export-table-to-file command and off it goes, only for timeouts to start piling up less than a minute later until it aborts. WTF. So then I set the most obvious timeout setting in the client: no change. Then another timeout setting on the client: no change. Then I try putting it in the client configuration file: no change. Then I set the timeout on the export query: no change. Then finally I bump the timeouts in the server config: no change. Then I find someone has downloaded it from both Tucows and apt, but they're using the Tucows version, so its real config is in /dev/database.xml (don't even ask). I increase that from seconds to a minute; it's still timing out after a minute.
In the end I have to make my own, and this involves working out how to parse non-standard binary-formatted data structures. It's the umpteenth time I have had to do this.
These aren't some no-name solutions, and it really terrifies me. All this is doing is taking some access logs, storing them in one place, then indexing them by timestamp. These things are all meant to be blazing fast, but grep is often faster. How the hell is such a trivial thing turned into one nightmare after another? Things that should take a few minutes take days of screwing around. I don't have access logs any more because I can't access them anymore.
The terror of this isn't that it's so awful; it's that all the little kiddies doing all this jazz for the first time and using all these shit-wipe buzzword-driven approaches have no fucking clue it's not meant to be this difficult. I'm replacing entire tens-of-thousands-to-million-line enterprise systems with a few hundred lines of code that are faster, more reliable and better in virtually every measurable way, time and time again.
This is constant. It's not one offender, it's not one project, it's not one company, it's not one developer; it's the industry standard. It's all over open source software and all over dev shops. Everything is exponentially becoming more bloated and difficult than it needs to be. I'm seeing people pull up a hundred cloud instances for things that would be happy at home with a few minutes to a week's worth of optimisation effort. Queries that are O(N^2) and would only take a few minutes to turn into O(log N), but instead people rent a fucking off-huge-ass SQL cluster that not only costs gobs of money but takes a ton of time to maintain and configure, and that isn't going to be done right either.
I think most people are bullshitting when they say they have impostor syndrome, but when the trend in technology is to make every fucking little trivial thing a thousand times more complex than it has to be, I can see how they'd feel that way. There's so bloody much you need to do that you don't need to do these days that you either can't get anything done right, or the smallest thing takes an age.
I have no idea why some people put up with some of these appliances. If you bought a dish washer that made washing dishes even harder than it was before you'd return it to the store.
Every time I see the terms enterprise, fast, big data, scalable, cloud, or anything of the like, I bang my head on the table. One of these days I'm going to lose my fucking tits.
-
So I just spent the last few hours trying to get the intro of a given Wikipedia article into my Telegram bot. It turns out that Wikipedia does have an API! But unfortunately it's born as a retard.
First I looked at https://www.mediawiki.org/wiki/API and almost thought it was a Wikipedia article about APIs. I almost skipped right over it in the search results (and it turns out that I should've). Upon opening and reading it, I found a shitload of endpoints that frankly I didn't give a shit about. Come on Wikipedia, just give me the fucking data to read out.
Ctrl-F on that page and I find a tiny little link to https://mediawiki.org/wiki/... which is basically what I needed. There's an example that... gets the data in XML form. Because JSON is clearly too much to ask for. Are you fucking braindead, Wikipedia? If my application was able to parse XML/HTML/whatever, it would be called a browser. With all due respect, I'm not gonna embed a fucking web browser in a bot. I'll leave that to the Electron "devs" that prefer raping my RAM instead.
OK, so after that I found in third-party documentation (always a good sign when that's more useful, isn't it) that it does support JSON. Retardpedia just doesn't use it by default. In fact, it's a parameter that wasn't even in the example query. Not including something crucial like that is surely a good way to let people know the feature is there. Massive kudos to you, Wikipedia... but not really. And a parameter that WAS in there by default, for fucking CORS, broke the whole goddamn thing unless I REMOVED it. Yeah, because CORS is so useful in a goddamn fucking API.
So I finally get a functioning JSON response; now all that's left is parsing it. Again, I only care about the content of the page. So I curl the endpoint and trim off the bits I don't need with jq... I was left with this monstrosity.
curl "https://en.wikipedia.org/w/api.php/...=*" | jq -r '.query.pages[0].revisions[0].slots.main.content'
Just how far can you nest your JSON, Wikipedia? Are you trying to find the limits of jq or something here?!
And THEN... as icing on the cake, the result doesn't quite look like JSON, nor does it really look like XML, but it has elements of both. I had no idea what to make of it, especially before I had a chance to look at the exact structured output of that command above (if you just pipe into jq without arguments, it's much less readable).
Then a friend of mine mentioned Wikitext. Turns out that not only is Wikipedia's API retarded, even the goddamn output is. What the fuck even is Wikitext? It's the Apple of wikis, apparently. Only Wikipedia uses it.
And apparently I'm not the only one who has found Wikipedia's API... irritating, to say the least. See e.g. https://utcc.utoronto.ca/~cks/...
Needless to say, my bot will not be getting Wikipedia integration at this point. I've seen enough. How about you make your API not retarded first, Wikipedia? And hopefully this rant saves someone else the time required to wade through this clusterfuck.
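For anyone walking the same path, a hedged sketch of the kind of request that sidesteps Wikitext entirely: the TextExtracts prop (exintro/explaintext) returns a plain-text intro, and formatversion=2 flattens some of that nesting (Python with requests, assumed installed):

import requests

resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "query",
        "prop": "extracts",       # TextExtracts extension
        "exintro": 1,             # intro section only
        "explaintext": 1,         # plain text, not wikitext/HTML
        "format": "json",
        "formatversion": 2,       # saner, less nested page structure
        "titles": "Python (programming language)",
    },
    headers={"User-Agent": "example-bot/0.1"},  # Wikipedia wants a UA
)
print(resp.json()["query"]["pages"][0]["extract"])

-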
I've made the json protocol. It's a protocol containing only json. No http or anything.
To parse a JSON object from a stream, you need a function that returns the length of the first object/array in all the data you've received so far. The result of that function gives you the right chunk of the JSON to deserialize.
For such a function, the JSON needs to be parsed, so I wrote that function in C, to be used with my C server and Python client. I finally implemented a C function in Python that has a real benefit / use case. Otherwise you'd have to validate bit by bit with the Python JSON parser, and that's slow while streaming. Some messages are quite big.
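A minimal pure-Python sketch of that length function, just to show the idea (the real one lives in C for speed; this naive version only tracks strings, escapes and bracket depth):

import json

def first_json_length(buf: str) -> int:
    """Length of the first complete JSON object/array in buf, or -1."""
    depth = 0
    in_string = False
    escaped = False
    for i, ch in enumerate(buf):
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            depth += 1
        elif ch in "}]":
            depth -= 1
            if depth == 0:
                return i + 1  # end of the first complete value
    return -1  # incomplete: wait for more bytes from the socket

# Peel complete messages off the front of a receive buffer:
buffer = '{"msg": "hi"}{"msg": "by'   # second message still arriving
n = first_json_length(buffer)
while n != -1:
    print(json.loads(buffer[:n]))     # {'msg': 'hi'}
    buffer = buffer[n:]
    n = first_json_length(buffer)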
Advantage of this protocol is that it's full duplex.
I'm very happy!
-
Is your code green?
I've been thinking a lot about this for the past year. There was recently an article on this on slashdot.
I like optimising things to a reasonable degree and avoiding bloat. What are some signs of code that isn't green?
* Use of technology that claims to be fast without real expert review and measurement. Lots of tech out there claims to be fast but actually isn't, or is fast only by saturating resources while being inefficient.
* It uses caching. Many might find that counter-intuitive. In technology it is surprisingly common to see people scale or cache rather than directly fixing the thing that's watt-expensive, which is compounded when the cache has weak coverage.
* It uses scaling. Originally scaling was a last resort. The reason is simple: it introduces excessive complexity. Today it's common to see people scale things rather than make them efficient. You end up needing ten instances when a bit of skill could bring you down to one, which could scale as well but likely won't need to.
* It uses a non-trivial framework. Frameworks are rarely fast. Most will fall in the range of ten to a thousand times slower in terms of CPU usage. Memory bloat may also force the need for more instances. Frameworks written in already slow high-level languages may be especially bad.
* Lacks optimisations for obvious bottlenecks.
* It runs slowly.
* It lacks even basic resource usage measurement.
Unfortunately smells are not enough on their own, but they are a start. Real measurement and expert review is the only way to get an idea of whether your code is reasonably green.
I find it not uncommon to see things require tens, hundreds or thousands of times the resources needed, if not more.
In terms of cycles that can be the difference between needing a single core and a thousand cores.
This is common in the industry but it's not because people didn't write everything in assembly. It's usually leaning toward the extreme opposite.
Optimisations are often easy and don't require writing code in binary. In fact the resulting code is often simpler. Excess complexity and inefficient code tend to go hand in hand. Sometimes a code cleaning service is all you need to enhance your green.
I once rewrote a data parsing library that had to parse a hundred MB and was a performance hotspot, from an interpreted language into C. I measured it and the results were good. It had been optimised as much as possible in the interpreted version, but the C version was still a minimum of 50 times faster.
I recently stumbled upon someone's attempt to do the same and I was able to optimise the interpreted version in five minutes to be twice as fast as the C++ version.
I see opportunities to optimise everywhere in software. A billion kg of CO2 could easily be saved if a few green-code shops popped up. It's also often a net win. Faster software, lower costs, lower management burden... I'm thinking of starting a consultancy.
The problem is that after witnessing the likes of Greta Thunberg, if that's what the next generation has in store, then as far as I'm concerned the world can fucking burn, and her generation along with it.
-
Had a bad day at work :( They gave me this code for some obscure streaming job and asked me to complete it. Only after 3 days did I realize that the LLD given to me was incorrect, as the data model had been updated. Another 2 days and I was able to debug the code and run it successfully: I could parse the tables and generate the required frame, but not stream it back to the output topic as per the LLD. That's where I needed help, but none of my emails/messages were replied to.

The main guy, who is pretty technical, scheduled a code review session with me. I expected that I would run the code and he would spot something I might've missed and explain why my streaming function isn't working. Instead, what happened was that he grilled me on each and every line of the code (which queried some obscure tables) and then got super mad at me, saying "Why are we having this code review session if your code is not complete?". I'm like, bruh, you asked for it, and yes, the main parsing logic is done; I'm just having this issue in the last part.

And he's like "Why didn't you tell me earlier?". Wtf?! I left at least 5 emails and a dozen messages. He's like "this has to go live on Monday", and I'm like "OK, I'll work over the weekend". And he's like "Don't tell me all these things! You're not doing me a favor by working on weekends! How am I to ask my colleagues to connect with you separately on Saturday/Sunday? You should have done it on the weekdays. What were you doing this whole week?". Bruh, I was running the code multiple times and debugging it using print statements. All while you were ignoring my attempts to reach out to you. SMH 🤦♂️ I can go on and on about this whole saga.
-
Working on an Android app for a client who has a dev team that is developing a web app with Ember.js / Rails. These folks are "in charge" of the endpoints our app needs to function. Now, as a native developer, I'm not a hater of the web app way of doing things, but with this particular app their dev team seems to think that all programming languages can parse JSON as dynamically as JavaScript...
Exhibit A:
- Sample Endpoint Documentation
* GetImportantInfo
* Params: $id // id of info to get details of
* Endpoint: get-info/$id
* Method: GET
* Entity Return {SampleInfoModel}
- Example API calls in desktop REST client
* get-info/1
- response
{
"a" : 0,
"b" : false,
"c" : null
}
* get-info/2
- response
{
"a" : [null, "random date stamp"],
"b" : 3.14,
"c" : {
"z" : false,
"y" : 0.5
}
}
* get-info/3
- response
{
"a" : "false" // yes as a string
"b" : "yellow"
"c" : 1.75
}
Look, I get that JS and Ruby have dynamic types, and a string can become a float can become a Boolean can become a cat can become an anvil. But that mess is very difficult to parse and make sense of in a stack that relies on static types.
After writing a million switch statements with cases like "is Float" or "is String" over Kotlin's Any type // alias for java.Object, I throw my hands in the air and tell my boss we need to get on the phone with these folks. He agrees and schedules a day for their main developer to come to our shop to "show us the ropes".
So the day comes and this guy shows up with his MacBook Pro and skinny jeans. We begin showing him the different data types coming back and explaining how it's bad for performance and can lead to bugs in the future if the model structure changes between different call params. He matter-of-factly has an epiphany and exclaims, "OHHHHHH! I got you covered, dawg!" and begins click-clacking on his laptop to make sense of it all. We decide not to disturb him any more so he can keep working.
3 hours go by...
He bursts out of our conference room shouting "I am the greatest coder in the world! There's no problem I can't solve! Test it now!"
Weary, we begin testing the endpoints in our REST clients....
His magic fix: every single response is a quoted string of JSON:
example:
- old response
{
"foo" : "bar"
}
- new "improved" response
"{ \"foo\" : \"bar\" }"
smh....
-
So I was designing a transformer and was looking up toroidal cores. The one I had in mind was made of a material the manufacturer calls “3E5”. Looking for a material datasheet, I found a website with a download link for a material “300000”…
Dear web devs: how the fuck is that even possible? I can see what happened here, but who thought whatever they were doing there was a good idea?
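My guess at what happened, sketched in Python: somewhere in their product pipeline, the grade name gets run through an eager string-to-number coercion.

# "3E5" is a ferrite grade, not a number, but scientific notation says otherwise:
print(float("3E5"))  # 300000.0 ... hello, material "300000"

-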
Can we all just agree to stop actively dreaming up progressively harder-to-parse CSV formatting options? For fuck’s sake, I’ve had to build in tolerance for quoted and unquoted data, combined data and split data, ways to split the data and recombine it, comparing every data point, filtering some data, only adding data, only removing data, basing data updates on non-Boolean fields in the file, setting endpoint matching based on arbitrary fields, column-number matching, header matching, manipulating malformed URLs and reassembling the file with proper ones... it goes the fuck on. CSVs should just be simple and not hard to format. Why does everyone want to try so fucking hard to do bizarro shit?!
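For contrast, here's how little code the sane case needs: a quick sketch with Python's stdlib csv module, which already copes with quoted and unquoted fields in the same file.

import csv
import io

data = 'id,name,note\n1,widget,"a, quoted, field"\n2,gadget,plain\n'
for row in csv.reader(io.StringIO(data)):
    print(row)
# ['id', 'name', 'note']
# ['1', 'widget', 'a, quoted, field']
# ['2', 'gadget', 'plain']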
-
Unreal Engine + third-party library adventures, part 2:
Now we gotta parse everything from Unreal formats to the library's formats, then back to Unreal again... sigh. Why can't it just use normal data formats like everybody else? I mean, come on: strings, ints, vectors, etc. all have standard libraries. Why does Unreal need to make its own shiet out of them? Just why...
-
So I thought to myself.
Hey, I'll go ahead and use Python; it will make this easier than using C++.
So I start looking at Python.
And I start looking at specific common functions that C/C++ and .NET all offer.
Like writing a fucking PNG image.
And I start seeing third-party libs that are at version 0.2.
And so I say: this is supposedly the language data people love, which would include searching GIS data too, right?
Everybody touts this language for AI and machine learning and all this other bullshit, but I can't even create a fucking image? And every document points to this same lib when it comes to creating this image? At version 0.2?? 20 years or more after PNG was created?
So I look up GeoTIFF, and see 0.4........ so..... what is this language good for again? I can parse JSON in JavaScript and do the other things I want...
Oh, scatterplot generation? What is it, being displayed in JPEG? Maybe the JPEG implementation is good. Because, you know, I just use scatterplots constantly. Yup. Most of the data I need to analyze uses scatterplots. Not risk.
fun.
Oh, and look, Django.... who the fuck uses Django?
And OMG, it makes me format my text or the run bombs.....
Jesus. RPG much?
I'm just... I'm not seeing...
WHY ?????????
and then I have Zimmerman's voice buzzing in my head about just using goddamn .NET
-
So, following my previous post, the issue happened again. For background: as I've been telling my boss for years, we need ELK set up and integrated into all our APIs, ASAP.
I think it should be a punishable crime to release any program into prod at a tech company without real-time logging/monitoring built in.
So, the issue is still happening, and the user sent us the request details. Now we need to find the actual node that handles the request and look into its logs to see the details.
Now he's doing it the hard way.... He just finished; it took an hour, and the best answer he can come up with is "I think .... Maybe ...".
And of course this is based on a partial look at the data. He stopped after finding a "probable cause".
I have a script that is like a poor man's ELK: it downloads all the logs and parses them so I can run queries to pinpoint the exact call and which log it's in, and see what's happening around it.
We'll see what my way finds, but it definitely does not take more than an hour...
Loading the data might, but that's because it needs to download all the logs and parse them first...
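The script is nothing clever; a hedged sketch of the idea, with hypothetical paths and log layout (assume the logs have already been pulled down into ./logs):

import pathlib

def find_request(request_id: str, context: int = 3) -> None:
    """Print every hit for request_id, with a few surrounding lines."""
    for log in pathlib.Path("logs").glob("**/*.log"):
        lines = log.read_text(errors="replace").splitlines()
        for i, line in enumerate(lines):
            if request_id in line:
                lo, hi = max(0, i - context), i + context + 1
                print(f"--- {log}:{i + 1} ---")
                print("\n".join(lines[lo:hi]))

find_request("req-1234-abcd")  # hypothetical request id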
On a side note, guess I'm back on devRant, as I have something to rant about. Though it's the same something that I was ranting about years ago... Monkeys...
-
What are your plans for Christmas?!?!!??
I normally won't engage in societal tropes like pointless, generic small talk or those questions people ask for lack of independent thought/societal trope-isms....
Here's my templated answer this year:
Background = ~$2k in piles of tech... server upgrade components, and apparently the only managed switch left in business/non-custom enterprise networking in the country indexed for sale
($2k in what I would pay.... my tech sourcing is more base-level, and +4 years pro exp (yea... since age 8... really))
Foreground.... a shiny ✨️ new, wonderfully discounted (for dumb reasons that I appreciate)... 10TB LFF HDD! 🥹🥲🤩
I really like raw data... with enough raw data and proper context-relevant, high-level, custom, precise algorithms, I genuinely believe literally any question or problem can be quantified and solved for.
So... I just keep getting data: life, sourcing, stats on human behaviour... I factor everything.
Yes, I realise I'm very odd.
// initial context plus curiosities
As parsed out into somewhat tangential commentary below... I can't keep turning people away for the sake of societally approved polite engagement. Therefore, when asked again by a factory sales rep who enjoys verbosity and apparently finds me extremely worth his intrigue/personal time:
// additional context (and my attempt to be more parse- and comment-conscious)
With a bunch of initial reveals and launches starting in a week, and technically being the "owner/boss" (cringy to me, so I've officially made my title (anywhere with custom input fields) DragonOverlord... dragons being a theme tied into all sects, and no one can say DragonOverlord isn't a position... as it's clearly a class... unless you find a human more style-code ignorant, comment inept, and in need of a very multilingual scribe to create a lexicon 2 steps before my code would even be followable without a likely bad headache and a Da Vinci Code-like adventure, including the improbably well-placed wise scholars that just happen to have significant, unique and vital information they are willing to freely share with strangers.
-
Have to upgrade an Android app that connects to a Bluetooth device and reads data.
I found that the app creates a new thread for every line read from the device's Bluetooth stream, then reuses one object across every reading to store the device state in a JSONArray, and every one of those threads creates another thread to upload the data to the cloud. Oh, I almost forgot: all of that runs in one UI activity, and the same thread that parses the data updates the UI.
Deadline is next Monday and I don't know where to start editing this shit to put MY shit in.
Fuck me, I guess?
-
So I'm working on a snippet of JS to generate widgets for a custom data dashboard at the moment, in a project where I've been paired with a junior "developer" (he's more of a junior script monkey, though), which is just plain painful...
Recently he wrote up a long message bitching about how my library API keeps changing, making it impossible for him to get any of his work done. This particular message even made references to "writing his own widget library" and "stabbing me in the eye".
It's currently at version 0.1.0-ALPHA, just by the way. Major version 0, mother fucker.
Anyways, one of my colleagues stepped in the other day to try to help him with the front-end stuff, which finally got me the feedback I was asking for. At which point we found out he's still working off a build I gave him 4 fucking weeks back.
Honestly though, I'd both love and hate to see him try to make a library to do this: pull data from a non-standard company data API, parse said data out of unnamed number arrays nested up to 4 levels deep, then morph that data into one of four different charts, or one of five views made up of custom markup.
All he has to do is create a UI to configure and present my widgets, but he can't even figure out how to integrate dependency management into his front-end project.
O.o
OMG. Can I stab him?? Pretty please?
-
The bug: some string values for an identifier property in the data objects are being sent from our frontend prefixed with a '0'. Sometimes. When it happens, it usually gets stripped away again by the time it's passed to our backend. But not always.
This 0 is never explicitly set anywhere. I even searched for a few variants of "= 0" in both the frontend and backend projects without getting any results. You might already be suspecting where this is going.
So it turns out.
The data object which holds this value is being initialized in the ASP.NET (don't ask) backend and passed to the frontend, which then hydrates it. This value is always an integer number, albeit incidentally so, which is why string is used as the actual type. When this object is initialized, it's hardcoded with an anonymous type where this property is set as int, because I guess someone figured "it's always an int though". In a typed language, primitive scalars can't be null, which means the property's value becomes the concrete int 0.
Okay weird. I can think of better ways of doing this but let's just set it to string as I can't start overhauling things right now. Let's just go find where this value is somehow concatenated into the incoming parameter.
You see, this happens because at the point where the frontend sets this value, it may be an int or a string depending on where it came from, and I guess someone figured that in order to cast it to string you just go prop += arg, seeing as the prop is an empty string and all. Because explicitly casting it, or (as much as I get a rash whenever I see it) going prop = "" + arg, would be too verbose and unoriginal.
Bonus round: how come the 0 only sometimes made it all the way to our backend? The thing is, this bug has been fixed before. The fix is that because this string is "always" an int, you can parse it to int before passing it to the backend, in case it has leading zeroes. This path is only taken in certain views, because someone forgot to copy-paste their fix into all the places the logic is repeated.
Sometimes you find a bug, and you are just somehow grumpier after fixing it.
-
In regards to my last string of posts about React and auth: I got it working, and the bearer token is being passed, but now I'm just getting XML errors every time I submit a form. All the data being passed is JSON. I've created a Stack Overflow question, https://stackoverflow.com/questions... as I'm getting nowhere and SO really isn't helping either. So if anyone wants to take a stab, go for it.
-
For a little background on the sort of stuff I'm dealing with, check out my last rant.
Anyways, I'm testing this pipeline at work and was just reminded of the fucktarded way a "software engineer", who had a bachelor's degree in biology, decided to handle a json file.
The script in question loads a json file containing an array of objects. The script is written in Perl. There's a JSON module. Use that? Fuck no! Let's instead perform an in-place sed command on the file, substituting the commas separating objects in the array with newlines, then proceed to read the file line by line and parse out the tokens manually. Mind you, in the process of adding the newlines he didn't keep the commas, so now all of the json files his bullshit handled are invalid json that cannot be parsed.
The dumb ass was lucky that the data in the file is always output upstream as a single line and the tokens for each object are always in the same order, so that never led to problems. But now, months after I fixed his stupidity, I am being reminded of it again as I test and debug some old projects as part of regression-testing new changes I'm making.
TL;DR Fuck dumbwit motherfuckers who can't even google "parsing a json file" and do literally anything less fucktarded than manually parsing a json file.
-
Today I started a project in which I must parse and extract some features from orders. Features can be product names, options, custom data and more, on which I then have to do some validation/processing.
The (main) problem? All I have is a String per order, and of course most of the products/options have either changed or been deleted.
I want to sudo rm -rf myself 😞
-
So I’ve been recording the same project into dr who bullshit meets the invasion of the body snatchers
If I were sleeping and motivated to finish something some bastardized people would steal, it would be so simple to keep progressing through it, and it would be contributed, allowing people to write their own implementations to parse and index GIS formats and integrate a lot of valuable data regarding travel and survivable conditions.
Except for all the above-mentioned BS, which cuts my goddamn motivation down to next to nothing.
And no one else seems to see a problem with this across the goddamn board, because the pods have already suckered their innards out.
-
Maybe I am picky, but:
Some people made a "FileStorage" API where the open method takes flags like READ, WRITE, APPEND... And they made it so that when you pass WRITE, the API opens the file and truncates it to empty.
And when someone filed a GitHub issue about this behaviour, they flagged it as a feature request. I'm so annoyed by not being able to overwrite my data; that's just rude. Like, should I use a file storage API to overwrite data like this:
0- Save the file's inner structure (you can't extract it from the API)
1- Open the file, parse the structure
2- Find the file to overwrite
3- Save all the mess again
4- Write your own API
5- Sigh
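For reference, the distinction that API is missing, sketched with Python's built-in open() modes (a sketch of the concept, not that library's actual API):

# WRITE-as-truncate: previous contents are gone the moment you open.
with open("data.txt", "w") as f:
    f.write("new contents")

# Open-for-update: overwrite in place, the rest of the file survives.
with open("data.txt", "r+") as f:
    f.seek(0)
    f.write("NEW")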