This is especially true in mature codebases, where figuring out what exactly needs to be done is the toughest part and the coding itself is the simplest. When you have worked on such a codebase for several years and have a complete mental model of its inner workings, explaining to the AI what you already have in mind is a waste of time.
It's an anti-waste measure: a large number of supplied chargers are essentially thrown straight in the bin because people already have a compatible charger.
Flutter was good, but now with Liquid Glass™ I find React Native (specifically Expo) with expo-ui far, far better for designing apps that match the native look and feel.
> Any suggestion how can I add such color visualization across body of work of a given artist or a style?
I think that the best visualization of the colors of a painting is by using the painting itself.
In class I demonstrate lightness by first desaturating the work (or copy pasting the L in Lab). I then do a controlled posterize on the image - basically a stepped curve in Photoshop. I try to isolate the dark, middle and light. These are relative values that can often manifest as lumps in the histogram. Painters tend to be very deliberate in the way they organize them. This page explains what I am getting at:
In my experience such posterizing is best done manually but AI might be able to do it.
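As a rough sketch of what that stepped-curve posterize does, here's a minimal numpy version, assuming a lightness channel in the 0–100 range (the L of Lab); the two thresholds are illustrative, not the ones I use in class:

```python
import numpy as np

def posterize_lightness(L, cuts=(33, 66)):
    """Collapse a lightness channel into dark / middle / light bands.

    L    : 2-D array of lightness values in 0..100 (Lab-style).
    cuts : two thresholds separating the bands (illustrative values).
    Each pixel is replaced by the midpoint of its band, which is
    what a stepped curve in Photoshop effectively does.
    """
    lo, hi = cuts
    bands = np.digitize(L, [lo, hi])              # 0=dark, 1=middle, 2=light
    midpoints = np.array([lo / 2, (lo + hi) / 2, (hi + 100) / 2])
    return midpoints[bands]

# Toy 2x3 "image": two dark, two middle, two light pixels
L = np.array([[10.0, 20.0, 40.0],
              [50.0, 70.0, 90.0]])
print(posterize_lightness(L))
```

On a real painting you would first convert to Lab and pull the L channel; the point is just that three flat bands make the value organization (and the lumps in the histogram) obvious.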
Hue and saturation are more difficult, for the simple reason that they are hard to disentangle from lightness.
Like lightness, saturation is generally organized according to low, middle and high. For most of art history, the saturation of a painting would closely follow its lightness. It was Gericault who separated them. Check out the saturation vs lightness of his Lobster painting for an example of this.
Hue is a beast. Sure, most paintings done before the impressionists are pretty unsaturated. But even Rembrandt would be careful to use a red brown against a green brown. Check out the Rembrandt image on this page to see this in action.
I think that a radial histogram is the best way to visualize hue. It would show not only the hues but their relationship to each other on the RYB wheel, and also their quantity. There should be a visible cut-off point for hue. In our work we established one: all hues with very low saturation were ignored.
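The cut-off idea can be sketched in a few lines of numpy, assuming HSV-style pixels with hue in degrees and saturation in 0–1; the 12 bins and the 0.1 saturation floor are assumptions for illustration, not our published values (and a true RYB radial plot would need a hue remap on top of this):

```python
import numpy as np

def hue_histogram(hue_deg, sat, n_bins=12, sat_cutoff=0.1):
    """Bin hues for a radial histogram, ignoring near-neutral pixels.

    hue_deg    : 1-D array of hues in degrees [0, 360)
    sat        : matching saturations in [0, 1]
    sat_cutoff : pixels below this saturation are treated as neutral
    Returns counts per bin; bin i covers [i*360/n_bins, (i+1)*360/n_bins).
    """
    chromatic = sat >= sat_cutoff                    # drop the near-greys
    bins = (hue_deg[chromatic] // (360 / n_bins)).astype(int) % n_bins
    return np.bincount(bins, minlength=n_bins)

# Toy example: two reds, one green, and one near-grey that gets ignored
h = np.array([5.0, 10.0, 120.0, 200.0])
s = np.array([0.9, 0.8, 0.7, 0.02])
print(hue_histogram(h, s))
```

The counts can then be drawn as wedge lengths on a polar plot, so both the hue relationships and their quantities are visible at once.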
Some of GDPR's language around consent for data processing (which, I will note, you only need if you don't have a legitimate and expected purpose for storing and processing it!) has implications for friction: many 'cookie popups' are not compliant because they make not giving consent harder than giving consent.
But deletion requests are not so strong: if you make people really jump through hoops then you might get in some trouble, but the expected standard is basically 'send an email and get a result within 30 days'.
> (…) founded CNN, a pioneering 24-hour network that revolutionized television news (…)
> (…) audacious vision to deliver news from around the world in real time, at all hours (…)
And thus marked the beginning of the end. 24-hour news, like social media, is a net negative for society. Networks have to keep making shit up to pad the never-ending run time, and it’s always bad news, making the world seem worse than it is and radicalising more people. It’s a version of doomscrolling where you don’t even have to scroll. It’s social media where only a few people can post, and their only goal is engagement.
Seems like he did good things in his life, and even here I don’t think he could’ve predicted (or even intended) the negative effects of this invention, but it doesn’t mean it should be celebrated (though this is on CNN’s website, so of course they will). I wonder if, like Nobel, he eventually realised the thing he created did more harm than good.
Doctors have to undergo minor professional-development refreshers, not replace their entire education. There is a reason we educate early in life: retraining later is hard, expensive, and sometimes approaching impossible.
The condensation argument is totally true. It strikes me, though, that the other metric I'd look at is how long code survives before being rewritten. Feels like it's a bit early to tell on that one.
From the shutting down of USAID, there are death-toll estimates exceeding 100,000 children and climbing. It combined "plausible deniability" with a total disregard for human life to a degree rarely seen in history, resulting in what can be argued to be mass murder.
Yeah right, they just accidentally massively profit from it. Come on dude, Valve has behavioral psychologists on staff. They don't just accidentally abuse players.
The flaw with that article (this being the Beeb showing its bias) is that it mainly applies to the English Home Counties.
So it is a southern English habit, not a British one. The other parts of England are more direct, and will use more obvious phrasing. Similarly the other parts of Britain will be more direct.
You're framing this as an ethical question, but copyright term lengths are essentially arbitrary. They're set by the government, as are the boundaries of fair use. At that point you're making a circular argument: that it's bad because it's illegal, and that it should be illegal because it's bad. So what happens if someone argues the opposite? That it's not unethical because it's fair use, and that it should be fair use because it's not unethical.
>If you can go from producing 200 lines of code a day to 2,000 lines of code a day, what else breaks? The entire software development lifecycle was, it turns out, designed around the idea that it takes a day to produce a few hundred lines of code. And now it doesn’t.
How is producing more lines of code any good? How does quality assurance work with immeasurable code bloat? I want good software, not slopware with 2,000 different features. A good product does few things, but does them really well. There is no need to constantly add lines of code to a working product.
There isn't much they can do to foster native Linux support beyond trying to increase the number of people gaming on Linux. It's a chicken-and-egg problem, and you need to make the platform desirable to developers before they will start developing for it.
You don't know buzzword A, B, C? Heh, he must be incompetent and know nothing.
The buzzwords mean nothing, really. The math is the same for a stupid or a smart model, because the model is trying to mimic properties of the training dataset.
You can give me the ultimate model architecture that will beat every model in existence, and I can still figure out a way to make it perform worse than what's available today. But you're not even doing that; you're just drumming up some old news.
If someone "threatened" me with tech advancements I would be more worried about things like an imminent massive drop in token costs for bigger context windows or other game changers like continual learning where the model internalizes your code base into its weights rather than just keeping it in its context.
Seems like something the AI could help you with: ask it in the prompt to return an error if the submitted title doesn't look like a whimsical fake-encyclopedia article title.