For the people kinda worried: this is a highly specialized piece of glass that is extremely complicated to manufacture at present and that cannot, due to the laws of thermodynamics, be 100% transparent. It's not going to allow surveillance through existing glass installations in any form, just possibly new ones if there's room for the support equipment and through the use of 4-5 digit piles of cash.
Any camera glass like this will have at least a mild tint and will be used in specialty applications. It'll also have pretty horrible SNR, resolution, and low light performance.
Currently the structural component of this tech is mainly used in extremely high end aerospace applications (various heads-up display type systems), so it's unlikely you'll ever run across one of these within the next decade.
Nasty remote sensing tech people can be worried about right now: RF surveillance from various combinations of mmWave, wall-penetrating radar, and wifi interferometry. Add in the fact that your iPhone has MAC randomization but every other device you own, including your car's TPMS, doesn't. Also Geiger-mode lidar is fun; one company I worked for mapped the inside of a random person's house with it as a demo.
>For the people kinda worried: this is a highly specialized piece of glass that is extremely complicated to manufacture at present and must, due to the laws of thermodynamics, not be 100% transparent. It's not going to allow surveillance through existing glass installations in any form
If you're worried that your Airbnb host is going to use it to spy on you (which was mentioned in the article): unless you already scrutinize every nail hole, photo frame, electrical appliance, electrical outlet, smoke detector, etc., this doesn't open up a new vulnerability.
Pinhole cameras with a lens as small as 2mm are already readily available and cheap; no one's going to use an expensive "window camera" to spy on you when they already have so many other options.
Perhaps those that fear government surveillance or other well funded adversaries may have cause for concern, but few of us are in that category.
But does the use of the word pinhole make any sense if it is not a pinhole but a "3.7mm" lens?
"Pinhole camera" is a specific term that has been in use for more than a century now. It is well defined. Now someone from marketing thought "let's call our small camera a pinhole camera". Sorry, but that is not a pinhole camera. That kind of camera has also existed for ages and is called an endoscope camera.
Words are important in IT, and if everyone thought a bit more about the terms they use, we would have way less friction with business and fewer business logic bugs.
>Words are important in IT and if everyone would just think a bit more about terms they use we would have way less friction with business and less business logic bugs.
I wish you the best of luck in your quest to make the English language follow rational rules.
I think people think that light goes through glass like a laser beam with nothing in its way. Just flying right through. But it's actually like a pool ball hitting another pool ball, and so on. The original photon of light is not what you see on the other side of the glass.
> due to the laws of thermodynamics, not be 100% transparent.
How useful is this statement, though? Regular glass isn't 100% transparent either, even in just the visible spectrum. Shouldn't we be more concerned with what the delta in the visible spectrum is, if we're concerned about easy identification? (Before mentioning that plenty of glass is purposefully tinted, and dynamic tinting is an application here.) And reasonably, couldn't we theoretically pick up a decent signal from simply capturing the reflections around the glass edge? I mean, we can now do 3D reconstructions from pointing a camera at a mirrored ball. I'm sure it'd be very noisy, but there is signal. To have the capability of projecting, you'd have the ability to do the reverse too, given that I doubt the internal structure of the glass would be (that) directionally dependent. Right? I could be missing something; it's been a while since I've done optics.
Thanks. The original description made this seem like far-future technological magic. A system that can somehow analyze a random pane of glass and derive all the transformations needed to use it as a high-precision waveguide? I actually had a manager ask me to develop such a thing, and I asked him how many dozen optics PhDs I could hire to accomplish this feat.
I assure you it is semi-opaque. A one-way mirror is a primitive example of this. You can make the mirror increasingly transparent, but it will be tinted right up until the point where it doesn't reflect anything.
It may not be noticeably visible to the human eye under most lighting conditions, though.
>> but the example on the CES floor does not have an __apparent__ tint.
> I assure you it is semi opaque
> It may not be __noticeably visible__ to the human eye under most lighting conditions, though.
I'm not sure you two are disagreeing. Seems like you're just reinforcing their point. I mean if someone said glass is opaque and their evidence is that you can't see through glass and your evidence is to point at UV light... well... you'd be technically correct but you're talking from a different ballpark.
Typically people can see better in the dark than imaging sensors can (only considering visible spectrum here), especially small sensors. If you redirect the light somewhere, what would you see? How much light would you need to divert?
If the image is grayscale and has very high gain/noise/ISO, then I imagine with a low enough frame rate you could avoid noticeable tinting. You would likely still notice the tint in comparison to regular glass; I'm too lazy to do the napkin math though.
Practically speaking, I would expect it to be strictly more noticeable than glass for holographic projection. If true, it's likely noticeable.
But the eye is fickle. You don't notice the tint of, say, a car window when you've been inside for a bit. Or sunglasses.
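For the napkin math on when a tint would be noticeable, here's a minimal sketch. The ~2% Weber fraction for brightness discrimination is a textbook ballpark; the transmittance values are just example numbers, not measurements of this glass:

```python
# Rough check: is a given transmittance drop noticeable next to plain glass?
# Assumes a ~2% Weber fraction for brightness discrimination (textbook
# ballpark); all transmittance values below are made-up examples.
WEBER_FRACTION = 0.02

def side_by_side_noticeable(t_glass: float, t_coated: float) -> bool:
    """True if the relative brightness difference exceeds the Weber threshold."""
    contrast = abs(t_glass - t_coated) / max(t_glass, t_coated)
    return contrast > WEBER_FRACTION

# 90% vs 70% transmittance: ~22% contrast, easily visible side by side.
print(side_by_side_noticeable(0.90, 0.70))  # True
# 90% vs 89%: ~1.1% contrast, below threshold even side by side.
print(side_by_side_noticeable(0.90, 0.89))  # False
```

The interesting implication is that without a reference pane next to it, the threshold is far more forgiving, which matches the "eye is fickle" point above.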
I don't think how you framed this is accurate. The quality is a function of the pixel size and of course the exposure time. I mean, those are the two main variables that control how much light hits the sensor, but we'd have to get into sensitivity to make direct comparisons (human eyes are VERY sensitive, and there's some evidence that the average eye is sensitive to a single photon, but you're not going to see well with that, and this is beside the point).
Definitely not a straightforward calculation, and I'm feeling too lazy to go grab my optics books. But guesstimating here, there should be enough light, considering you can see light and distortions when looking at the side of a pane. So I'd lean toward it being more about the quality of the signal. Glass is highly structured, but I'm assuming this type of glass has some unique optical properties to specifically give the side panels higher quality signals. My understanding is that they are projecting from the side, so that's what I'm inferring from -- basically just reverse the photon direction through the medium, and I don't have reason to believe that quality is directionally dependent (should I?). My whole spitballing is dependent on this understanding.
> it's likely noticeable.
Well one user said they didn't notice it.
> They state their glass is 70% transparent. That's definitely noticeably opaque.
That's not uncommon for Low-E glass, so maybe not very noticeable, especially in many different environments. But yeah, I think if you compared side by side you'd notice. I think we're just using different criteria and we probably decently align?
Idk, I am mostly thinking aloud here, so I do want to convey that I don't have high confidence. It's been 10 years since I've been in an optics lab lol. But I've built microphones with really weak signals before, and they are useful. Distorted, but useful. (Definitely not fun being forced to run experiments at 3am and finding out someone is walking around on the other side of the building...) So I can't see why this couldn't (theoretically) be similarly done for a window? I definitely don't think this would work with a typical pane of glass, especially considering how they're cut, but this does seem like specialty glass specifically built for directing photons from an edge to the pane face. Any idea if it can only display to one face? (I'm sure you can invert the image, but the projection face probably matters for a camera?)
Neither do most storefront windows and yet they're often made to reduce the transmission of UV light to protect displayed goods from sunlight or intentionally darkened so they're less transparent when the display is not lit up. You just don't notice it normally or dismiss it as an effect of ordinary glare, which is the point.
I agree that this shouldn't be anywhere near the top of people's privacy concern list. A $1 traditional digital camera can already be hidden very easily and this probably costs thousands of dollars at least if you could even get it.
There are some places where a not-completely-transparent (say, 90%) piece of glass might not raise too much suspicion, but which could be more difficult to retrofit with a more traditional "pinhole" style spycam.
Like the wall of a shower stall, which you would expect to be less than 100% transparent.
I do think this threat model is less likely, however.
What I think is more likely is a Minority Report style system where there are pervasive cameras everywhere, constantly scanning your face, etc.... Maybe not constantly scanning your retinas and identifying you that way, but facial recognition alone would be more than enough. Either way, once you're recognized, you could be easily tracked silently, or you could be called out publicly by a nearby display.
> It's not going to allow surveillance through existing glass installations in any form, just possibly new ones if there's room for the support equipment and through the use of 4-5 digit piles of cash.
I agree. I had this worrying idea (realisation..?) that one day, maybe triple digit years away, maybe sooner, that tiny cameras and mics the size of grains of salt will be everywhere. They will cost nothing to produce, be self-charging, interconnected to each other and created to ‘reduce crime’ or ‘make you safer’. And in the same way as forever chemicals, you can’t get rid of them. Trillions of them, in fields, the ground. Spreading around stuck to your shoes, on your car tires.
Just a crazy idea, but I think that if they could make that happen today then they would. And that part is the main point - There is no limit to surveillance anymore. I live in the UK, that realisation is in my face every day. Can’t even take a trip to Tesco without being run through facial recognition.
There is no care for the concept of privacy anymore. All the richest companies in the world don’t make their money from caring about our privacy.
I believe I read a science fiction book around this topic. Postsingular by Rudy Rucker, I believe, related to nanotechnology. Can't remember most of it, but it delved a little into how it affects relationships re: everything being visible always to everyone. Not sure I'd entirely recommend it, but still interesting to see the thoughts and outcomes others come up with in regard to these types of potential technological changes.
I hadn't heard of Postsingular, so I had a quick look at Wikipedia, and the first part I read about it reminded me of a Black Mirror episode with the killer robot bees (a scary but brilliant episode which somehow felt like it could be realistic one day!). Will check this out further, thanks.
Politicians and high ranking unelected officials also will want privacy, so truly omnipresent surveillance would be concomitant with jamming and obfuscation technologies. There'd just be an endless arms race.
There will come a time when everyone will face the panopticon though. It’ll be unavoidable, like forever chemicals. I suppose that will be when the machines take over!!
With a sufficient culture shift, I could imagine a world where the capability exists but is outlawed and shunned. Surveillance capitalism and blanket security surveillance feel possible to overthrow politically, and to limit somewhat through legislation.
That leaves targeted surveillance, emergency surveillance, and war-time surveillance. Those will probably not be limited, though they are inherently more limited.
As soon as one country implements such tech, all the powerful countries would do the same. It's not really that far away from the tech we have already; just simpler, smaller, super dispensable, and not carried around in our pockets all day long (smartphones).
The article is poorly written, as it only discusses the camera component. Strangely, they chose stock images of holographic and optical displays, but didn’t mention that even once.
1) ZEISS unveils holographic Smart Glass at CES 2024, both for displays/projection/filtering, but also another component which is a holographic camera
2) The holocam works by utilizing coupling, decoupling, and light guiding elements to redirect incident light to a concealed sensor, eliminating the need for visible cutouts or installation spaces in visible areas.
3) ZEISS doesn’t plan to be manufacturer, so other companies can use the tech
> The Holocam technology "uses holographic in-coupling, light guiding and de-coupling elements to redirect the incoming light of a transparent medium to a hidden image sensor."
That suggests, at least to me, that you'll need something more than just a simple sheet of glass. There's probably some engineering required to allow light to be guided and redirected towards what sounds like a typical camera sensor.
Yes I interpret it as guiding light inside the glass such that the sensor is on the lateral aspect of the glass embedded in a glass frame perpendicular to the target image.
Sounds like they embedded a light splitter with some sort of periscope-style lens inside a glass pane.
So not actually bolted oblique to the glass as GP suggested.
I don't actually know what that specifically means in this case.
If I were to guess, I'd assume that "de-coupling" means "splitting the spectrum into RGB." RGB, because it's easy to reproduce for human viewing, but you could do this with other spectrums as well.
I imagine "in-coupling" refers to how they get the light to a sensor.
I used to work for a company that would use lasers to "build" mirrors inside of blocks of glass. They could tune those mirrors to reflect just red, or just blue, or green, or any combo.
I can imagine using this sort of tech to build panels of glass that route "de-coupled" RGB to spectrum-specific sensors.
Or, it might be possible that they're just capturing the full spectrum of light, but using microscopic etchings or surface treatments to bounce light down a layer of glass, somehow bouncing "pixels" of light down the pane of glass in a way that isn't lost while capturing other "pixels".
I say "pixels", but what I mean is "a portion of the glass." If you think of the glass as the polished ends of fiber optic strands, all bundled together, each of them routed to a specific sensor pixel - that's sort of what I mean. If you can etch your prisms/combs/gates fine enough, you can have higher "pixel" density from your glass pane.
Then again, they're the lens experts and could be doing something else entirely. I'm really just guessing.
Edit: there's an actually good demo video showing the real state of the tech rather than mockups: https://www.youtube.com/watch?v=NORPeCcIXRQ It was buried below in the comments, so I'm just surfacing it higher. Everything else is artists lying to you.
---
> Glass surfaces can also generate energy. The microoptical layer in the window pane absorbs incident sunlight and transmits it in concentrated form to a solar cell. This combines the advantages of conventional windows – natural light and an unrestricted view – with the additional benefit of efficient energy production.
Nah, this will only be good enough for sensor-level power (like 5W from a whole window). Only useful in very limited circumstances. It's not going to replace normal solar power.
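As a sanity check on that sensor-level-power claim, here's a rough Python sketch. Every number is a guess for illustration, not a Zeiss spec; in particular the coupling fraction is kept small on the assumption that diverting much more light would visibly tint the glass:

```python
# Napkin math: power from a light-guiding window feeding an edge solar cell.
# All numbers are rough assumptions, not Zeiss specs.
window_area_m2 = 1.0           # typical residential window pane
peak_insolation_w_m2 = 1000.0  # clear-sky solar irradiance at the surface
coupling_fraction = 0.02       # guess: fraction of light the layer diverts
                               # (must stay small or the glass looks tinted)
cell_efficiency = 0.20         # typical silicon cell

power_w = (window_area_m2 * peak_insolation_w_m2
           * coupling_fraction * cell_efficiency)
print(f"~{power_w:.1f} W per window at peak sun")  # ~4.0 W
```

Under those assumptions you land right in the single-digit-watt range, which is sensor-and-trickle-charge territory, not rooftop-solar territory.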
It is, but do you really want your windows to have integrated cameras? You can't just add this on to a window because the light is transported to the edge of the glass and that is inaccessible in normal windows.
you mean because it can also power a wireless transmitter or a large memory array storage? cuz a hologram that's not "plugged into something" might turn out to not be that useful.
> coupling, decoupling and light guiding elements to divert incident light to a concealed sensor
So, there's a camera in the dash looking up at the windshield and focusing where it expects to see a face, thereby using the windshield as a reflector? And maybe there's some additional etching and deposited films in the windshield to support the angles required?
And perhaps you can put cameras elsewhere, and similarly subtly modify the windshield or other glass to look at other things as well?
Wow. The original article was talking about doorbells and TV broadcast applications. That looks like the quality of the original moon landing broadcast.
>ESA and NASA space missions have carried this trailblazing ZEISS technology on board for many years. It is also well established in the semiconductor and medical technology sectors.
Huh. Seems like it should be fairly easy to find info on then... though some googling around makes me think they might just be referring to their more general diffraction gratings and whatnot.
Could you put a polarizer on this and have that filter out the other side of the display, so a smart window is one-way visible? That could make smart glasses and headsets much better.
I doubt you'd even need to. The fact that it's not a uniform grey blob means they're already controlling how the light leaves the device. I suspect it's already directional.
Second, #3 is always a red flag for me. It is sometimes code for "we can do this in the lab but we have no idea how one would manufacture it." A similar analogy is "we've got this great idea for a program you can license but no one here knows how to actually code it up."
Third, the impact of this going mainstream would be hard to overestimate. All those people working on transparent displays like they do in sci-fi movies? Yup, they could do that. A video conference system with solid eye contact (mentioned in a couple of places)? Sure, you could do that too. A mirror that could show you wearing different clothes? Yup, I could see how that would be coded.
That #3 though. That is what tempers my enthusiasm. Did I miss any announcement that they had a display at CES? (or was it just an announcement) If the former I would seriously consider flying over to Vegas to check this out.
Does Zeiss manufacture any public-facing camera currently? All their general-public-facing projects are collaborations with Sony or Vivo or other makers, and I think they only manufacture lenses, so nothing with a high quality chip in it.
It's probably another story on the medical side, but this doesn't fit a highly specialized, professional-only niche with a PC doing the processing on the side.
Ooh thanks for that! Showing monochrome only. I wonder if three layers of glass would get you color? And fixed focus (not too surprising) but that answers two questions I had initially.
As they claim this can also do display (presumably to replace the camera element with a projection element) it would be interesting to see what level of opacity they can achieve in a brightly lit environment.
I don't disagree. One way to know if it can live up to the hype is that it will be the lighthouse feature of the next iPhone, which will not have a notch for the front-facing camera, will do "touch anywhere" Touch ID, and will process all of the screen operating gestures visually rather than capacitively, so that you can operate your phone while wearing ski gloves if you want to.
It's a reverse light guide. We've been beam forming for a long time, it's unsurprising that the reverse is possible (imaging through a light pipe).
The principal issue will be gathering enough energy. A well lit source like a bathroom mirror (mirror behind the light guide) could work pretty well I'd wager. If the light guide is too efficient then it will appear opaque, so there is a trade-off.
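To put a crude number on that trade-off, here's a toy photon budget. It deliberately ignores imaging optics and geometry entirely (so it's an upper bound, not an SNR estimate), and every value is an assumption; it just asks how many photons a visually subtle 1% diversion could hand to each sensor pixel:

```python
# Toy photon budget for a window camera. Upper bound only: ignores imaging
# geometry, lens losses, and sensor quantum efficiency. All values assumed.
lux_indoor = 300.0            # typical room illuminance at the window
lumens_per_watt = 683.0       # luminous efficacy at 555 nm (best case)
photon_energy_j = 3.6e-19     # energy of a ~green photon
window_area_m2 = 1.0          # one pane
coupling_fraction = 0.01      # 1% diverted: roughly below a visible tint
pixels = 640 * 480            # modest VGA sensor
frame_s = 1 / 30              # 30 fps exposure window

radiant_w = lux_indoor * window_area_m2 / lumens_per_watt
photons_per_pixel = (radiant_w * coupling_fraction / photon_energy_j
                     * frame_s / pixels)
print(f"~{photons_per_pixel:.2e} photons/pixel/frame (upper bound)")
```

Even with these generous assumptions the raw photon count is enormous, which suggests the hard part isn't total light but how cleanly the guide can sort that light into pixels, i.e. the efficiency-vs-opacity trade-off above.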
I find "turns any window" pretty misleading. Unless I'm missing something this needs very special glass or at the very least a special coating/laminate.
For folks worried about privacy, it will almost always be more convenient and cost-effective to install a tiny spy camera somewhere.
Zeiss isn't going to aspire to sell cheap glass on razor thin margins.
> Zeiss isn't going to aspire to sell cheap glass on razor thin margins.
Especially true since Zeiss isn't planning on selling these at all. They're selling licenses to the tech.
I doubt that undermines your privacy argument in any meaningful way however. Even if the license was free, the cost of producing the components is certainly magnitudes higher compared to the cost of current tech that accomplishes similar goals.
I remember a friend on a team adjacent to the surface table team talking about them trying to do this when they switched from projection tables to tables with screens in them. As I recall with the projector based tables (the ones that looked like old cocktail arcade tables) I think they were using an ir camera to detect items and touches on the 'screen'.
Yes, the original Surface tabletop used IR cameras. Its successor, the Samsung SUR40 had in-cell sensing, i.e. photosensors embedded into the LCD so that it could capture a 960x540 image of the table surface.
> It's a reverse light guide. We've been beam forming for a long time, it's unsurprising that the reverse is possible (imaging through a light pipe).
Beamforming in RF frequencies is old, beamforming at optical wavelengths is pretty new shit. Doing the reverse, that is receiving a signal, is brand spanking new fresh out the cow hot shit at any frequency.
No, it's not really new. Small form factor is relatively new but even then the issue lies in yield and quality control.
Projector light engines include potentially many light-beam forming condensing and projection lenses precisely to concentrate light into a uniform quadrilateral. That's not new. The industry continues to advance, though.
Even holographic projection isn't new. This is just that, except the light is (sort of) taking the reverse path.
Technically there is no reason why it can't also project light outwards simultaneously. However light guides aren't really reversible like that: the light usually exits through a small fraction of the guide's external surface area. Reversing that means the entire external surface area is potentially collecting light, which would result in some undesirable caustics in all scenarios I can imagine. Light engines are in part designed to account for this (by reflecting or sinking light into an absorber) but this is still pretty different from what Zeiss is promoting.
For a small permanent installation where you are in control of the lighting, I could see this working relatively well, but I have a hard time imagining you could get close to anything resembling photo quality without a lot of environmental treatment. Conversely, holographic projection is pretty doable on a mobile platform like a headset.
This is pretty new only in that it's probably at least an order of magnitude more difficult to accomplish and thus hasn't been viable up until now. Fundamentally none of the concepts are new, as far as I can tell.
As an example, look at your nearest window and imagine it divided into a grid of uniform pixels. Each pixel is a small mirror that reflects a point of focus (wherever you are) to another, smaller point on an imaging sensor somewhere in the frame. This would look pretty jarring (and jagged) to most people, until you layer a complementary piece of glass on top of it to make the exterior flush. The two pieces of glass would be high enough precision that once you put them together they are effectively one piece of glass. Ta da.
This is effectively the same process as making any other multi-lensed glass, and that's Zeiss's wheelhouse.
3D print lenses in a header for threads to be arbitrarily placed in a [COMPOUND] lens and pull that feed with AI spatial mapping (yes, these are easy now)... EYE
And you can make these in many increments - micro even... "Hey NSA, super simple optical Prince Rupert's drops on a composite eye" (self-destructing fiber lens when discovered)
As a side hustle, why don’t you start building and selling desktop PC’s to local businesses, competing with HP, Dell, etc on margin? That’s also tech, right?
While building cheap PCs at razor thin margins is adjacent to whatever your tech job is, and probably something you’re able to do, it probably wouldn’t be the most profitable use of your time
Whilst I kind of see where you and the sibling comment are coming from, Zeiss does make glass: they make precision glass. Making cheap glass would be quality diversification, which is not exactly unheard of.
It sounds reasonable, but if they diversified quality, it would mean making precision glass and then also somewhat less precise glass. They will not start mass producing large panes of low margin glass. It's not what they're good at and it's not a lucrative market. Like Intel isn't going to start making jellybean parts.
They could license their name to some existing glass company, but they still wouldn't really be the ones producing it, and I think Zeiss has avoided diluting their brand name like this so far.
Zeiss and Schott are both owned by the Carl Zeiss foundation; the origin story is somewhat covered by Material World by Ed Conway. It's a fantastic book and well worth a read.
Years ago I was able to visit Zeiss in Oberkochen. They had a fantastic headquarters with a few older lithography systems. I think a couple of the instruments were $80 million or so.
There's a facility across from the Autobahn that was so sensitive that trucks going by would throw off their machines. They had to put padding on the Autobahn to prevent it. This was after they put the foundation on some kind of suspension. My coworker said they hire the most PhDs in Europe.
You need to attract PhDs and account for logistics. If you want to attract top talent, you need to be in a desirable place, or be a desirable employer with access to desirable places.
Sounds like LIGO, but they're in the US. They had to put the AC unit for their entire facility on suspension because their instruments were so sensitive. And they ask people to not accelerate or decelerate so quickly when driving around the campus.
> Given the current fear around hidden cameras in Airbnbs, the idea of every single window (or even shower door) in a rental property being able to spy on you is a little disconcerting.
While there are some really interesting potential applications for this tech, it is also more than a little disconcerting.
The ubiquity of camera phones and the emergence of tech like those Meta glasses is already pushing us to disconcerting (albeit interesting and in some cases very useful) places, but some of these cutting edge concepts worry me. WiFi seeing through walls also comes to mind…
I'm surprised this isn't the top comment. I'm all for the benefits of this tech, and hadn't even thought about the airbnb style implication.
People didn't like that Google Glass could always be filming, now we don't even have a physical camera.
Ray-Ban/Meta (I believe) have a sensor to detect that the wearer has not attempted to cover the light which shows that the camera is in use, but how will that work when every piece of glass is a camera?
After some searching I found a patent I think may be related to this https://patents.google.com/patent/WO2020225109A1/en because it uses the phrase "Holocam", is German and was filed by Audi (the press release mentions automotive applications as the primary initial use case). It's a translation from German which makes it a bit tougher to parse than the usual patent.
The total lack of any deeper information beyond the bold yet vague claims in the press release is frustrating. The PR makes it sound like a miraculous breakthrough destined to change everything. The source release on the Zeiss site only adds two bits of info.
> "The transparency of the holographic layer has only a minimal effect on the brilliance of the image reproduction. It is also possible to detect spectral components as additional information to complement the visible image. The resulting data provide insights into environmental contamination such as air pollution and UV exposure."
However, experience shows that in reality bold+vague claims like this inevitably come with significant trade-offs and constraints which limit its applications (little things like cost, power, fidelity, size, speed, etc). This is especially true in early implementations of new tech. That said, it may still be both interesting and useful. Unfortunately, we have no way to even think about how it might be useful because Zeiss marketing has chosen to play 'hide the ball' instead of just releasing a technical explainer outlining relevant trade-offs, limitations, etc.
If I was talking to someone from Zeiss my first questions would be about how much the additional components impact the optical characteristics of the glass, what the resolution of the resulting image data is, how large are the components needed at the edges and how far away can they be from the capture zone? Then, of course, how the output of the resulting imaging system maps into traditional camera/lens metrics like f-stops, aperture, imager size/density, gain, focal length, etc. Zeiss is an optics company after all.
For sure, was disappointed with the original article for only linking generic keywords back to their own site. At least the CES one is from the horse's mouth.
> Glass surfaces can also generate energy. The microoptical layer in the window pane absorbs incident sunlight and transmits it in concentrated form to a solar cell. This combines the advantages of conventional windows – natural light and an unrestricted view – with the additional benefit of efficient energy production.
Agree. Obviously a few orders of magnitude in cost reduction would be required... but this seem like it could have interesting potential for energy generation in high-rise buildings, which currently have near-zero solar footprint. With that said, the lack of any mention regarding efficiency makes this part of the press release seem like a bit of smoke and, erm, mirrors.
Do a search on "non line of sight imaging" (aka NLOS). Here's just one public paper.[1] If this is what is being published, what do you think the intelligence agencies have?
> This means that everything from the window in your car to the screen on your laptop to the glass on your front door can now possess an invisible image sensor.
Retailers, marketers, and data brokers are salivating.
I can't wait until the windows in our homes plaster ads over everything every time we look outside.
It'll sure be distracting when it's the windshields of our cars, but I do look forward to the legal drama when companies get sued for painting their "holographic" ads on top of the adspace other people already paid to pollute with their own advertisements.
I'm not sure where you live, but in many places there are strict city/county-level ordinances restricting signs (particularly lit-up signs) and advertising. There's a reason people's backyards aren't littered with billboards.
Anyone that's experienced a cracked windshield understands that this won't be going anywhere outside of some very niche and expensive cars.
> Holographic 3D content permits more design, branding, guidance and information functions. For example, side and rear windows can be used for eye-catching Car2X communications. It is also possible to black out window glass or make projected text and images visible only from the inside or outside. Video content is also supported. (https://www.zeiss.com/corporate/en/about-zeiss/present/newsr...)
The telescreen received and transmitted simultaneously. Any sound that Winston made, above the level of a very low whisper, would be picked up by it; moreover, so long as he remained within the field of vision which the metal plaque commanded, he could be seen as well as heard. There was of course no way of knowing whether you were being watched at any given moment.
"Webcams that enable you to look anywhere on your screen."
Minor aside, but does anyone actually care about this? Forever ago, I was told to try and look into the camera in order to project eye contact during video calls, but now that just seems like a cultural hangup that arose from people not being familiar with video calling. Now that it is more ubiquitous, I feel like we all have collectively agreed that the eye contact thing is unnecessary?
It's not just cultural. We've evolved to recognize eye-contact. Newborn babies immediately know when you look at them. Eye contact with a dog communicates dominance.
You've just acclimated to losing that signal during video chats. Bring it back, and you'll have a richer experience.
More generally, we have evolved special neural hardware to recognize gaze, with a special case for when gaze is directed at our eyes. Gaze is important because it's a sign of where people are attending. And as Herb Simon said: "in the information age, attention is the scarce resource."
Consider the possibilities when the screen-glass camera view could be different for each of the participants on your videoconference screen, so that as you shift your eyes from one to another, they each get a different experience of eye contact.
Staring into your camera on a multiparty call means everyone feels like they are getting your eye contact, which devalues it. Eye contact moving between participants is the glue that holds conversations together, and it's why video calls are still stuck in the conference-call era in terms of talking over one another.
> but now that just seems like a cultural hangup that arose from people not being familiar with video calling.
Eye contact is about as old as humankind. So maybe in 300,000 years we can check if we get over it with video calls.
It is not that communication is not possible without it. It just removes an additional layer.
Eye contact conveys all kind of useful meta data. Depending on the circumstances and other verbal and non-verbal cues it can mean "I'm listening to you", "oh, maybe let's push more on this topic", "have you also caught that?", "perhaps it would be wiser to not talk about that", "i'm specifically addressing you" or "can you help me out here?". And these are just the ones immediately applicable in business settings. Let's not even talk about all the ways it can be used during flirting.
And it works on a subconscious level. I regularly DM role-playing games, both online and in person. The lack of eye contact is a serious impediment in online games. One can use eye contact, or the lack of it, to judge engagement. One can signal turns with eye contact, or indicate who an NPC is talking to. It can also be used to help less talkative players seize amazing role-playing moments.
Can we get by without it? Sure. We can communicate anything using a 1-bit communication channel using morse code. It is a bit like dancing in shackles. Can be done, just leaves something extra on the table.
To clarify I didn't mean that all eye-contact is cultural, I meant specifically during video calls. I was told that if I didn't look into the camera during my interview, it would be perceived as rude or unprofessional.
No, we haven’t agreed that. I continue to feel somewhat rude when not looking at the camera during a conversation. And it matters to nonverbal communication.
I also still habitually look at the camera (due to said training early in my career), but I think that the important thing is I don't think it is rude when other people don't do it. And in my experience, the vast majority of people I interact with don't either, which is why it feels like we have agreed that it is not important.
Idk, I’ve noticed when I’m on sales calls, some sales people make a very conscious effort to look into their webcam, and the effect of them “looking at me” in the video feels:
A) Initially, a little bit odd, because 98% of people don't do it / I'm not used to seeing that
B) Then feels pleasantly good/natural.
I'm not going to base a decision on a performative behavior like looking at the webcam - at least, not consciously - but I'm sure if/when this becomes the norm, webcams NOT focused on eyes will seem very "off"
I feel this is not an issue with 13 inch laptops. But I recently installed a webcam on my 32 inch desktop monitor and it is really distracting how I'm always looking way below the camera. Currently using Nvidia broadcast to make it better but it does lose tracking if I look at the bottom of the screen.
No, and I'm absolutely not willing to give up the mechanical camera shutter I have on my laptop for that feature. They could add a mechanical shutter to the hidden camera in the frame but why do all of this.
Edit: Should have said "mechanical lens cover" instead of "mechanical camera shutter"
This is an odd, almost reverse nitpick. The word "shutter" is general-purpose. It's literally a thing that shuts the opening. There's one inside an SLR camera, as you point out. There's often one on a webcam. They are common on windows on houses.
All words are general-purpose, but when we arrange them they adopt new meaning. I'm not nitpicking, and just pointing out that the meaning of those words as arranged may not be perceived as the author originally intended.
Alternatively, I'd suggest a different arrangement of words like "lens cover", "webcam cover", "webcam cover slider", or "webcam privacy shield" to be more accurate.
Of course we were able to grok what "mechanical camera shutter" meant in this context, although at first I was confused and wondered if in fact there were webcams with mechanical focal-plane or leaf shutters.
This is beyond even science fiction. I could have never imagined something like this was even possible - I still can’t. Is there a demo of this tech in action?
Rough translation of the video transcript (YouTube auto-transcript run through deepl.com, so expect some garbling):
> This year's theme is holograms, and fittingly this is called a Holocam. If you look right here, you can see a little bit of blue; that blue element is actually acting as the camera. What we call holography means it reflects the incoming light, so that light can be collected and the glass can act like a camera. For example, it can pick out and recognize only the blue part. Zeiss's Holocam. Quite interesting.
Yeah it sounds like a great option to place a camera invisibly in the middle top half of your screen so you can actually look a person in the eye when videoconferencing. No more weirdly looking down or to the side for everyone.
I'm sure it can be used for creepiness as well but I see the benefits too.
Given that this will probably only produce a relatively low-res, noisy image, the sweet spot might be to use this tech to capture a true eye-contact perspective from the glass in front of a screen, then combine it via a neural net with images from offset cameras to create a 'true' through-the-screen view.
Having the screen be the camera 1) removes the need for the hole-punch as well as the "magic island" workaround, and 2) could clearly be used in tandem to improve the effect, even potentially reducing the required computation. Sure, Google also fixes images in post, but that's still not a substitute for better cameras. You're always information-limited.
It seems to work fine with glasses for me. Only it loses tracking if I look too far away (bottom of the monitor). Also it thinks I have olive green eyes for some reason.
This doesn't solve the issue, because it looks like the person is always making eye contact! You don't want that—you want it to look like the person is making eye contact only when they are actually making eye contact. Constant eye contact is exhausting.
It does what??? How have I never heard about this before? Is this deployed right now? Is it always enabled? I have so many questions, don't even know where to start. That sounds amazing.
If I look at my camera lens when I'm talking and I look at the screen while you're talking, and then you do the same, then there's no issue. Am I the only one that does this? (I know the answer. I am. At least at my company, I am.)
I never understood bigger and more numerous camera bumps until I used optical zoom and took photos at night. Image quality with thick lenses is incomparably superior to that of thin lenses and 3 image sensors collect 3x as many photons as 1 so you don't have to hold the phone still for such a long time. Ideally the entire backside of my phone would be lenses
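The parent's photon argument can be sanity-checked with shot-noise statistics (a back-of-envelope sketch, nothing specific to any phone's actual sensors):

```python
import math

# Photon arrival is Poisson-distributed, so for N collected photons the
# shot-noise-limited SNR is N / sqrt(N) = sqrt(N).
def shot_noise_snr(photons: float) -> float:
    return math.sqrt(photons)

n = 10_000                                    # photons one sensor collects in an exposure
gain = shot_noise_snr(3 * n) / shot_noise_snr(n)
print(f"SNR gain from 3 sensors: {gain:.3f}")  # sqrt(3) ~ 1.732
```

Equivalently, three sensors reach the same SNR with roughly a third of the exposure time, which is the "don't have to hold the phone still as long" point.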
I tried to go on holiday with my DSLR… what a pain. So heavy (unless you buy a $3000 mirrorless) for photos I never print, and the times I do print them no one ever takes the time to look at them. They stay in a closet until thrown out.
The quality from my iPhone is not that great as a 4K monitor wallpaper compared to my DSLR. But given that it weighs nothing, and that on holiday the iPhone lets me text, use GPS and Google Maps, play, read articles, watch videos, edit my photos in raw immediately, and create a shared album right away… it's a no-brainer now.
Buy any compact camera with a 30x optical zoom and it will take better pictures than a smartphone. Maybe not all of them, but you can zoom on details from a distance, take pictures of animals that would flee if you try to get close, etc. The last one I bought about 10 years ago was about 300 Euro, maybe 200. It has wifi to backup pictures to my phone.
You're right about the strengths of the phone camera, and I also take many more pictures with it. But printing large prints, framing and hanging pictures is absolutely worth doing, much more than printing 4×5s to flip through. I love having them and everyone who comes into my home takes some time to look at them.
Just buy a used mirrorless camera, it’ll set you back a couple hundred, not 3k.
New technology is fun and all, but even decade old digital cameras are more than good enough for good holiday snaps that you might want to crop and print.
I regularly use my DSLR and print photos with my own Canon photo printer. I bought a 70-300mm lens for a brony convention (Galacon) and it is simply impossible to get images of that quality with a phone.
And the 50mm f/1.8 is really great considering the low price.
Yes and I manually sort through thousands of sunflower seeds every year to get the good ones for my breeding program but our obscure hobbies aren’t the norm.
For the everyday needs of the average non-enthusiast consumer, I'd argue that an iPhone is simply better than a DSLR or mirrorless - not just more portable or more convenient, but capable of producing reliably better images.
Sure, the DSLR is the obvious choice if you're a serious photographer, but most people aren't serious photographers. If they buy a DSLR or mirrorless camera, they're going to use the kit lens and leave the mode dial on auto. For people who just want to point and shoot, the iPhone's computational brilliance shines through.
The iPhone isn't so much a camera as a generative algorithm that happens to use image sensor data as a prompt. That's infuriating if you're a photographer who just wants full control over a big sensor, but it's tantamount to magic if you don't know what an f-stop is and have no inclination to learn.
I'd bet that if you gave my mother an iPhone and a DSLR, she'd get consistently better images from the iPhone, even if we gave her a one-day crash course on photography first. Sure, she might fluke the odd decent photo with the DSLR, but 90% of the time, the iPhone's algorithmic guesswork is going to beat better imaging hardware with dumber software.
Phone cameras produce muddy trash. Ever tried to make a good portrait in difficult light? That was the reason I began photographing seriously: the photos I took with my phone at Galacon a year prior were trash.
And yeah, you have to learn how to properly use a dedicated camera, which is why I only shoot with manual settings. For casual users, phones are enough.
So we should call those shots snapshots when made on a phone. Well, the only thing you need to know when shooting with DSLRs is to move the dial to Auto/P. If you want to play with depth of field, move it to Av. If you want to shoot the dog, move it to Tv and 1/500. If you want to shoot an airshow, move it to 1/1200 or higher. Is that too much? :D
Your 500-euro DSLR sucks compared to my 1500-euro phone for gaming. Do you carry your DSLR in your pocket every time you leave the house? This is obviously a matter of specialized vs. general use.
Yeah, I seriously doubt that. Not without a lens that costs 3-4x the camera. If you spent 500€ on a kit that makes iPhone or Pixel photos look bad... you bought it stolen, lol.
I've read through all the comments here but I still don't have the slightest idea of how this works.
I see some references to "light guides" and words like "coupling" but I don't know what those mean at all, and all of my googling is not helping to explain how light guides embedded in a thin, mostly transparent layer could possibly be used to project a hi-res holographic 3D light field out of a piece of glass. How big is each guide? What is it exactly -- what is it actually made of? How is it shaped? How are they arranged? How are they illuminated? How do you manufacture something like this?
Don't think of it like a transparent camera. Think of the window as a giant external periscope lens, connected to the rest of the camera with such thin fiber optics that you can't notice.
Sure, but how do you build fiber optics like that? Especially so each "pixel" has to shoot out hundreds (?) of separate fibers to cover every possible angle to achieve the holographic effect?
For just a 1MP display, we're talking like hundreds of millions of fibers? That fit in some thin transparent layer? How?
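Making that arithmetic explicit (all numbers here are illustrative guesses, not specs from the article):

```python
# Back-of-envelope for a hypothetical light-field "window camera":
pixels = 1_000_000        # a 1 MP image
angles_per_pixel = 100    # guessed number of angular samples per pixel
channels = pixels * angles_per_pixel
print(f"{channels:,} independent light channels")  # 100,000,000
```

Which is where "hundreds of millions" comes from, and arguably one reason a holographic grating layer (which can multiplex many angles in a single thin film) is more plausible than literal discrete fibers.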
I'm also a bit in over my head here, but I'd guess the major innovation is in manufacturing glass so that optic channels are a natural part of the glass's characteristics as it's made. We're probably talking fiber optics at a molecular level.
> The Holocam technology "uses holographic in-coupling, light guiding and de-coupling elements to redirect the incoming light of a transparent medium to a hidden image sensor."
It looks like it still requires a sensor, it can just be hidden in the frame of the glass.
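For the curious, here's a sketch of the physics such an in-coupler could rely on: a diffraction grating recorded in the glass steers incoming light past the critical angle, so it gets trapped by total internal reflection and guided to a sensor at the edge. This is my reading of how holographic waveguide couplers generally work, not Zeiss's actual design, and the numbers are just typical textbook values:

```python
import math

n_glass = 1.5          # typical refractive index of window glass
wavelength = 532e-9    # green light, metres

# Critical angle for total internal reflection at a glass/air interface:
theta_c = math.degrees(math.asin(1 / n_glass))   # ~41.8 degrees

# Grating equation (first order, normal incidence):
#   n_glass * sin(theta_d) = wavelength / period
# Choose a diffraction angle comfortably beyond the critical angle:
theta_d = math.radians(60)
period = wavelength / (n_glass * math.sin(theta_d))

print(f"critical angle: {theta_c:.1f} deg")                      # 41.8 deg
print(f"grating period for 60 deg in-coupling: {period*1e9:.0f} nm")  # ~410 nm
```

A sub-wavelength grating period like that is exactly the kind of structure holographic recording is good at producing, which would explain the "holographic in-coupling" wording.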
Sure it has a sensor, it's not magic. But OTOH if it can capture the POV of a real line of eye contact with another image projected _through_ the camera, the effects are transformative, it doesn't matter whether ultimately there is a sensor somewhere or not
They're specifically marketing to car manufacturers in the press release video. Which is unfortunate because I don't need more non-haptic buggy interfaces in my car. The technology itself is amazing even though the holo-cam appears to be just greyscale and rather blurry right now. The ability to embed the optics in/near (hard to tell from the marketing) the edge of the glass is awesome as you don't need a projector that is separate (by much) from the display surface.
Reads like press-release wank. So not a Lytro, but imagining the possibilities of reflection-based computed image reconstruction. They accomplished getting their brand out there without offering anything new. Check out their about us:
> From time to time we also publish advertorials (paid-for editorial content) and sponsored content on the site.
So embedded micro light pipes? Cool idea. I wonder if it creates any kind of noticeable perturbation in the glass. Like if you embedded this in a window and were looking through a portion of the glass that contained these light pipes would you be able to notice them? Might be a problem for putting this type of glass in front of a display.
Am I alone in thinking this is cool but not inherently useful? The only use case where this might be useful is in video calls to make ‘eye contact’ more realistic but I’m sure there are software fixes (altering pupils)
In most cases such as the windscreen you can just have a camera on the frame. Why does it need to utilise the glass itself?
I'm super excited about whether something like this could be used to prevent bird strikes on windows. If anyone wants to grow the market for this technology there are likely square miles worth of glass that could have this installed to prevent birds from being killed.
Given that Zeiss proposes this technology for vehicles, windows, etc., I'm curious about the effects of water droplets in direct contact with the surface changing the index of refraction locally (vs air).
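That concern seems physically reasonable. A quick Snell's-law sketch (typical textbook indices, nothing specific to this product) shows how much a droplet shifts the critical angle for light guided by total internal reflection:

```python
import math

# Light guided inside glass by total internal reflection depends on what's
# on the other side of the surface. A water droplet raises the critical
# angle and can let guided rays leak out ("frustrated" TIR).
n_glass, n_air, n_water = 1.5, 1.0, 1.33   # typical refractive indices

def critical_angle_deg(n_inside: float, n_outside: float) -> float:
    """Smallest internal angle (from the normal) still totally reflected."""
    return math.degrees(math.asin(n_outside / n_inside))

print(critical_angle_deg(n_glass, n_air))    # ~41.8 deg against air
print(critical_angle_deg(n_glass, n_water))  # ~62.5 deg against water
```

Any ray travelling between those two angles is guided on a dry pane but escapes wherever a droplet wets the surface, so rain could plausibly punch local holes in the light path.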
Implants which modulate and reflect incoming radio signals back to the radar device, especially when powered by the radio wave itself, are nothing new in clandestine surveillance.
I’ve said many times, eye contact will be the killer app of VR…
… but it will be ironic if 2D beats the VR companies to it, using technology like this.
What an utter failure on the part of Meta that would be. With such a huge head start and they still don’t sell eye contact (except on their Pro device? Maybe?)
Next someone will tell me they have a way to “ssh” into the universe and get process details on every entity in it including every human and what’s happening.
I guess the closest thing you can get to that -- ssh into somewhere and get details on every human -- is to ssh into the government's database on everyone remotely of interest, or even accidentally connected (in essence almost everyone), built from both computer and human-agent collection and analysis.