2D To 3D AI Startup Was Actually Humans Doing the Work Manually – Slashdot

What’s old [wikipedia.org] is new again.
An AI company overstating their claims? Say it isn’t so!
Really, every AI company dramatically overstates their claims, but an outright Mechanical Turk really takes the cake.
I’m a little curious as to what their end-goal was. (I did not RTFA, in accordance with local custom.) Were they hoping to get enough training data to actually do the thing they said they could do? Were they hoping to quickly sell to the least diligent buyer in history?
They were probably hoping for billions to come pouring out of investor wallets in some ridiculous IPO scam six months from now, after the hype-and-clickbait marketing team sold the market a convincing story to feed a stock ticker just long enough to cash out the stock options.
The lies may be feeding a lot more lemmings these days, but Greed’s motivation hasn’t changed. Neither has a corrupt market.
When I was working in the self-driving car industry, this is how our cars could be ‘driverless’: someone would make key decisions for the car several times per mile from a desk located a few miles from the car … and this was on a relatively short loop.
When I see such a car on the road I still stay away from it, not just to avoid potential crashes but to avoid potential eye damage:
https://cleantechnica.com/2021… [cleantechnica.com]

“An artificial intelligence company, whose founder Forbes included in a 30 Under 30 list recently…”
Elizabeth Holmes was 19 years old when she founded Theranos.
SBF ran a rather infamous Bahamian operation consisting of mainly twentysomethings.
The Forbes Under 30 list is starting to sound like America trying to convince itself there’s real value in a teenage voter.
AI-controlled organic wetware is the next big thing
At this point I respect the grifters more than their chumps.

At this point I respect the grifters more than their chumps.
Well, except for the fact that many of the most gullible chumps are decision-makers who are laying off people and attempting to replace them with the aforementioned AI. Even if those people eventually get hired back (or hired elsewhere), it’s those people whose lives are being disrupted.
It’s not like AI is replacing the managers…
It was 2001, and as part of my undergrad education, I listened to a presentation by a University of Illinois professor detailing how he (or rather his grad students) had automated the process of creating 3D models from pairs of 2D images.
3D scene reconstruction from 2D images is already a solved problem in computer science. If you’re hiring actual artists to do this for you, may I suggest hiring a CS grad instead?
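For context, the textbook two-view version of this fits in a few lines of OpenCV. A minimal sketch, assuming an already-rectified stereo pair; the filenames and tuning values are made up for illustration, and this is generic OpenCV, not whatever the professor's system actually did:

import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching over a rectified pair; numDisparities must be a
# multiple of 16 and blockSize must be odd. Both are tuning knobs.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# With known focal length f and stereo baseline B, per-pixel depth is
# f * B / disparity wherever a valid match was found.
cv2.imwrite("disparity.png", cv2.normalize(
    disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8"))

The triangulation geometry itself is routine; the fragile part is finding correct pixel correspondences between views, which is exactly where real-world inputs stop cooperating.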

It was 2001, and as part of my undergrad education, I listened to a presentation by a University of Illinois professor detailing how he (or rather his grad students) had automated the process of creating 3D models from pairs of 2D images.
3D scene reconstruction from 2D images is already a solved problem in computer science. If you’re hiring actual artists to do this for you, may I suggest hiring a CS grad instead?
From multiple 2D images that are nice, clean, and extremely consistent, or from single 2D images with significant clues (like shading), yes. But it turns into a nasty problem fairly quickly when there are even slight inconsistencies. Just look at 3D scanner apps [makeuseof.com]: they work decently given a few dozen clear photos of a simple object, but try to scan something slightly more complicated, like a person trying to hold their hand still, and you get gibberish. You definitely need AI to make a 3D model based on a handful of illustrations.
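To make the "slight inconsistencies" point concrete: multi-view reconstruction pipelines start by matching features across images, and that step is what collapses on inconsistent inputs. A rough sketch with stock OpenCV, with filenames and thresholds invented for illustration (this is not anyone's production pipeline):

import cv2

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe local features in each view.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Lowe's ratio test: keep a match only when it is clearly better than
# the runner-up. Two clean photos of a rigid object pass hundreds of
# matches; two freehand illustrations of the "same" object often pass
# almost none, and everything downstream falls apart.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"usable matches: {len(good)}")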
The docs for this startup actually smell legit, though. For instance, they want multiple images but don’t require them [kaedim3d.com], which is what I’d expect from a modern-era 2D -> 3D transformer model.
And FTA:
“Surely to quality control you actually need something to judge the quality of,” they said, before adding that some workers only saw the initial 2D image a client had submitted and not an output generated by the AI.
Note the “some workers” bit. In other words, a lot of workers are seeing the first-pass AI-generated model.
So this startup is doing pretty much exactly what I’d expect: use some initial AI to do the 2D -> 3D conversion when it can, and then have artists fill in the gaps, which at this point are massive.
The current page [kaedim3d.com] seems transparent about this:
“Kaedim’s machine learning and in-house art team combine to deliver production-quality assets in minutes.”
But a version from July 25th [archive.org] makes no mention of this (and it has the “magic” quote mentioned in the article). So I suspect they were hiding the human assistance and updated the web page when they got caught.
I think there’s a substantial difference between the amateur-level, “works on any phone” type of application which chokes on complicated scenery, and a professional tool with calibrated cameras, known perspective, known poses, etc… Finding the corner cases for software algorithms, especially 3D problems (which tend to be asymptotic), is not difficult, but once someone understands how and why the software behaves as it does, success becomes a matter of experiential knowledge.
Because this was aimed at professionals, it is reasonable that they could expect their users to undergo some training to develop their proficiency. A professional artist might reasonably expect to paint a few studies of a subject – perhaps a few hours to a dozen or so hours – before incorporating the subject into a greater work. And this is just for a single painting. The principles of getting good results from a 2D-3D transform are not difficult to understand, and it is often much easier to train the person (e.g., the Palm Pilot and its handwriting recognition) than to get a computer algorithm to work with sub-optimal positioning. When you understand the principles of how the computer will attempt to reconstruct a 3D model from your images, it becomes easier to pose or position the camera in such a way that the algorithm has the easiest time reconstructing the set of 3D points making up the image.

I think there’s a substantial difference between the amateur-level, “works on any phone” type of application which chokes on complicated scenery, and a professional tool with calibrated cameras, known perspective, known poses, etc… Finding the corner cases for software algorithms, especially 3D problems (which tend to be asymptotic), is not difficult, but once someone understands how and why the software behaves as it does, success becomes a matter of experiential knowledge.
The scenery, when you’re dealing with a phone app, is going to be tricky to get rid of (I don’t know the linear algebra behind it myself), but otherwise, not having calibrated cameras, known perspective, known poses, etc… is a problem made worse, not better, when working with illustrations.

Because this was aimed at professionals, it is reasonable that they could expect their users to undergo some training to develop their proficiency. A professional artist might reasonably expect to paint a few studies of a subject – perhaps a few hours to a dozen or so hours – before incorporating the subject into a greater work. And this is just for a single painting. The principles of getting good results from a 2D-3D transform are not difficult to understand, and it is often much easier to train the person (e.g., the Palm Pilot and its handwriting recognition) than to get a computer algorithm to work with sub-optimal positioning. When you understand the principles of how the computer will attempt to reconstruct a 3D model from your images, it becomes easier to pose or position the camera in such a way that the algorithm has the easiest time reconstructing the set of 3D points making up the image.
An algorithmic 2D-3D transform is going to want the exact same 3D model from different camera angles.
An illustrator doing a study is going to want the character doing different 3D poses from different camera angles.
Those are very different problems.
We can’t be having too much Human in our Artificial Intelligence… false advertising.
I propose a new update to Betteridge’s Law: Any headline in popular media talking about AI can be answered with “no”. No, it doesn’t work that way. No, that’s a dumb idea. No, AI isn’t capable of that. No, you have no idea what AI is.
I’m sure it could be generalized as “Any headline talking about the latest tech buzzword…”. Blockchain is another excellent example.
Furthermore it can be assumed that the reporter completely misunderstood what was actually being said, or the person saying it is running a scam. Or both.
I remember when “database” was the new buzzword. All of a sudden, any collection of files was now a “database”, whether there was an RDBMS using them or not.
That’s the way I still use the word. Any collection of data is a database, regardless of how it’s accessed. It could be because I’m (usually) not directly involved with the storage of the data. I access it via abstract methods that could as easily apply to an SQL DB, a key/value store, or even just a plain text doc and grep. I say “get data matching these parameters” and it appears. I don’t care about the implementation.
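That backend-agnostic view is easy to sketch. A minimal illustration in Python, with every class and name invented for this example (not any particular library):

import re
import sqlite3

class DictStore:
    """Key/value backend: a list of dicts filtered in memory."""
    def __init__(self, rows):
        self.rows = rows
    def get(self, **params):
        return [r for r in self.rows
                if all(r.get(k) == v for k, v in params.items())]

class SqliteStore:
    """Relational backend: the same get(), built on SQL."""
    def __init__(self, conn, table):
        self.conn, self.table = conn, table
    def get(self, **params):
        where = " AND ".join(f"{k} = ?" for k in params)
        query = f"SELECT * FROM {self.table} WHERE {where}"
        return self.conn.execute(query, tuple(params.values())).fetchall()

class GrepStore:
    """Plain-text backend: grep a file's lines for key=value pairs."""
    def __init__(self, path):
        self.path = path
    def get(self, **params):
        pattern = ".*".join(f"{re.escape(k)}={re.escape(str(v))}"
                            for k, v in params.items())
        with open(self.path) as f:
            return [line.rstrip() for line in f if re.search(pattern, line)]

# The calling code is identical whichever backend sits underneath:
#   store = SqliteStore(sqlite3.connect("app.db"), "users")
#   store.get(name="alice", status="active")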
The summary and article intentionally use language to imply scandalous “Theranos” style fraud by begging the reader to make inductive assumptions.
OK, shady as hell if they were telling people they were getting AI-generated results when they were human-generated… but beyond that… what is the news here?
Using humans as quality control is only logical, AI company or not. No company will spend money without a reason.
It’s not my fault you’re stupid. That you assumed he was the only person working there and computers were the next magic. Maybe try understanding what you’re claiming to write about before your clickbait stupidity. Man I gotta get of
Artificial artificial intelligence?
It’s like Victor/Victoria — only with computers.
I have a neat little startup to test AI use in creatures. It may have some bugs at the beginning but after a few Agile sprints I’ll probably have those fixed.
— God
how AI companies can sometimes overstate the capabilities of their technology…