Monday, March 18, 2024

Nikon to acquire RED.com

From Nikon newsroom: https://www.nikon.com/company/news/2024/0307_01.html

Nikon to Acquire US Cinema Camera Manufacturer RED.com, LLC

March 7, 2024

TOKYO - Nikon Corporation (Nikon) hereby announces that it has entered into an agreement to acquire 100% of the outstanding membership interests of RED.com, LLC (RED), pursuant to a Membership Interest Purchase Agreement with Mr. James Jannard, its founder, and Mr. Jarred Land, its current President. Upon satisfaction of certain closing conditions thereunder, RED will become a wholly-owned subsidiary of Nikon.

Since its establishment in 2005, RED has been at the forefront of digital cinema cameras, introducing industry-defining products from the original RED ONE 4K to the cutting-edge V-RAPTOR [X] with its proprietary RAW compression technology. RED's contributions to the film industry have not only earned it an Academy Award but have also made it the camera of choice for numerous Hollywood productions, celebrated by directors and cinematographers worldwide for its commitment to innovation and image quality optimized for the highest levels of filmmaking and video production.

This agreement was reached as a result of the mutual desire of Nikon and RED to meet customers' needs and offer exceptional user experiences that exceed expectations by merging the strengths of both companies. Nikon's expertise in product development, exceptional reliability, and know-how in image processing, optical technology, and user interfaces, combined with RED's knowledge of cinema cameras, including its unique image compression technology and color science, will enable the development of distinctive products in the professional digital cinema camera market.

Nikon will leverage this acquisition to expand into the fast-growing professional digital cinema camera market, building on both companies' business foundations and networks, promising an exciting future of product development that will continue to push the boundaries of what is possible in film and video production.

Sunday, March 17, 2024

Job Postings - Week of 17 March 2024

WeRide

Camera Sensor Engineer

San Jose, California, USA

Link

ISDI

Image Sensor Engineer

London, England, UK

Link

HRL Laboratories

Focal Plane Engineer

Camarillo, California, USA

Link

HRL Laboratories

Senior Infrared Detector Research Scientist

Camarillo, California, USA

Link

Paul Scherrer Institute

Postdoctoral Fellow in detector development

Villigen, Switzerland

Link

Kappa Optronics

Engineer for image sensor and camera technology

Göttingen, Germany

Link

Caeleste

Characterization Engineer

Mechelen, Belgium

Link

University of Amsterdam - NIKHEF

Postdoc position in ALICE and Detector R&D for Experimental Particle Physics

Amsterdam, Netherlands

Link

GE Healthcare

Detector Mechanical Engineer

Hino, Tokyo, Japan

Link

Friday, March 15, 2024

Three New Videos from Photonis

Photonis has released new videos describing the latest improvements in its image intensifiers.

Some background might be useful for readers with little exposure to image intensifiers.

First, Photonis itself. Those of you who are interested in the whole complex story can find it here. The original Photonis was a renamed spinoff of Philips that subsequently acquired a few other companies including Burle, the renamed spinoff of RCA's vacuum tube operation. Recently, the Photonis Group renamed itself Exosens but still uses Photonis as the brand for its image intensifiers.

Image intensifiers are vacuum tubes with, at one end, a photocathode that emits electrons on receipt of photons; some form of acceleration and electron-multiplication mechanism in between; and, at the other end, a phosphor that produces a brighter visible image. (A rough numeric sketch of this gain chain follows the generation list below.) As new developments have been applied to intensifiers, successive generations have been designated.

Gen 0 - See this (somewhat irreverent) link. (Not real, of course.) Sometimes the first low-gain tubes are called Gen 0.

Gen 1 - Light hitting an alkali photocathode produces electrons that are accelerated and electrostatically focused by a metal cone onto a curved phosphor. These tubes invert the image, which is re-inverted by the optics. 1930s-1960s

Gen 2 - Proximity-focused electrons from the photocathode hit a microchannel plate in which they are multiplied. The electron output is proximity-focused on a flat phosphor. Some of these still have the focusing cone to provide image inversion. 1970s

Gen 3 - The alkali photocathode is replaced by a cesium-coated gallium arsenide membrane. 1970s-1990s

Gen 4 - Photocathode improvements of various types and, typically, electronic gating. Strictly speaking, these are still Gen 3. 2000s+
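
To put rough numbers on the photocathode -> MCP -> phosphor chain described above, here is a minimal back-of-the-envelope sketch in Python. All parameter values are illustrative, order-of-magnitude assumptions, not Photonis specifications.

# Back-of-the-envelope photon gain for a Gen 2/3-style intensifier chain.
# Every number here is an assumed, illustrative value, not a datasheet figure.
def intensifier_photon_gain(qe=0.25, mcp_gain=5e3, phosphor_photons_per_e=150):
    """Output photons per input photon for an assumed tube."""
    photoelectrons = qe                            # photoelectrons per incident photon
    electrons_out = photoelectrons * mcp_gain      # after microchannel-plate multiplication
    return electrons_out * phosphor_photons_per_e  # visible photons off the phosphor

print(f"~{intensifier_photon_gain():,.0f} output photons per input photon")

With these assumed values the chain yields a photon gain on the order of 10^5, which is why intensified scenes are visible to the eye or a sensor behind the tube.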

The videos showing tubes Photonis characterizes as Gen 4+:

1 - Demonstration of electronic gating

2 - Demonstration of performance

3 - Demonstration of halo improvements

Thursday, March 14, 2024

IEEE ICCP 2024 Call for Papers, Submission Deadline March 22, 2024

Call for Papers: IEEE International Conference on Computational Photography (ICCP) 2024
https://iccp-conference.org/iccp2024/call-for-papers/
Submission Deadline: March 22, 2024 @ 23:59 CET

ICCP is an international venue for disseminating and discussing new scholarly work in computational photography and novel imaging, sensor, and optics techniques. This year, ICCP will take place at EPFL in Lausanne, Switzerland, on July 22-24!

As in previous years, ICCP is coordinating with the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) for a special issue on Computational Photography to be published after the conference.

ICCP 2024 seeks novel and high-quality submissions in all areas of computational photography, including, but not limited to:

  •  High-performance imaging.
  •  Computational cameras, illumination, and displays.
  •  Advanced image and video processing.
  •  Integration of imaging, physics, and machine learning.
  •  Organizing and exploiting photo / video collections.
  •  Structured light and time-of-flight imaging.
  •  Appearance, shape, and illumination capture.
  •  Computational optics (wavefront coding, digital holography, compressive sensing, etc.).
  •  Sensor and illumination hardware.
  •  Imaging models and limits.
  •  Physics-based rendering, neural rendering, and differentiable rendering.
  •  Applications: imaging on mobile platforms, scientific imaging, medicine and biology, user interfaces, AR/VR systems.

Learn more on the ICCP 2024 website, and submit your latest advancements by Friday, March 22, 2024.

The call for posters and demos will be published soon, with a deadline at the end of April. It will also be a great opportunity to advertise your work.

Wednesday, March 13, 2024

Prophesee Qualcomm demo at Mobile World Congress

Prophesee and Qualcomm recently showcased their "blur free" mobile photography technology at the Mobile World Congress in Barcelona.

Press release: https://prophesee-1.reportablenews.com/pr/prophesee-s-metavision-image-deblur-solution-for-smartphones-is-now-production-ready-seamlessly-optimized-for-the-snapdragon-8-gen-3-mobile-platform

February 27, 2024 – Paris, France - Prophesee SA, inventor of the most advanced neuromorphic vision systems, today announced that the progress achieved through its collaboration with Qualcomm Technologies, Inc. has now reached production stage. A live demo during Mobile World Congress Barcelona is showcasing Prophesee’s native compatibility with premium Snapdragon® mobile platforms, bringing the speed, efficiency, and quality of neuromorphic-enabled vision to cameras in mobile devices.

Prophesee’s event-based Metavision sensors and AI, optimized for use with Snapdragon platforms, now bring motion blur cancellation and overall image quality to unprecedented levels, especially in the most challenging scenarios faced by conventional frame-based RGB sensors: fast-moving and low-light scenes.

“We have made significant progress since we announced this collaboration in February 2023, achieving the technical milestones that demonstrate the impressive impact on image quality our event-based technology has in mobile devices containing Snapdragon mobile platforms. As a result, our Metavision Deblur solution has now reached production readiness,” said Luca Verre, CEO and co-founder of Prophesee. “We look forward to unleashing the next generation of smartphone photography and video with Prophesee’s Metavision.”

“Qualcomm Technologies is thrilled to continue our strong collaboration with Prophesee, joining efforts to efficiently optimize Prophesee’s event-based Metavision technology for use with our flagship Snapdragon 8 Gen 3 Mobile Platform. This will deliver significant enhancements to image quality and bring new features enabled by event cameras’ shutter-free capability to devices powered by Snapdragon mobile platforms,” said Judd Heape, VP of Product Management at Qualcomm Technologies, Inc.

How it works
Prophesee’s breakthrough sensors add a new sensing dimension to mobile photography. They change the paradigm in traditional image capture by focusing only on changes in a scene, pixel by pixel, continuously, at extreme speeds.

Each pixel in the Metavision sensor embeds a logic core, enabling it to act as a neuron. Each pixel activates itself intelligently and asynchronously depending on the number of photons it senses. A pixel activating itself is called an event. In essence, events are driven by the scene's dynamics rather than an arbitrary clock, so the acquisition speed always matches the actual scene dynamics.
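
As a concrete illustration, here is a toy version of the contrast-threshold event model commonly used in the event-vision literature. It sketches the general principle only, not Prophesee's proprietary pixel design; the threshold and test signal are assumed values.

import numpy as np

# Toy contrast-threshold event model (assumed parameters, illustrative only).
def generate_events(intensity, times, threshold=0.2):
    """Emit (time, polarity) events whenever log-intensity moves by >= threshold."""
    events = []
    ref = np.log(intensity[0])              # per-pixel reference log-intensity
    for t, value in zip(times[1:], intensity[1:]):
        while abs(np.log(value) - ref) >= threshold:
            polarity = 1 if np.log(value) > ref else -1
            ref += polarity * threshold     # move the reference after each event
            events.append((t, polarity))
    return events

# A 10x brightness ramp over 1 ms yields a burst of positive events.
t = np.linspace(0.0, 1e-3, 100)
ramp = np.linspace(1.0, 10.0, 100)
print(len(generate_events(ramp, t)), "events")   # ~ log(10)/0.2, i.e. 11 events

Note that a static pixel emits nothing at all: the data rate follows scene activity, which is the source of the efficiency claims above.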

High-performance event-based deblurring is achieved by synchronizing a frame-based sensor with Prophesee's event-based sensor. The system then fills the gaps between and within frames with microsecond-resolution events to algorithmically extract pure motion information and repair motion blur.
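
One way this kind of frame/event fusion is formulated in the research literature is the event-based double-integral idea: a blurred pixel value is the time-average of the latent sharp values over the exposure, and the integrated events give the log-intensity change during it. The single-pixel sketch below illustrates that idea only; Prophesee's production algorithm is not public, and the contrast constant and event values are assumptions.

import numpy as np

# Single-pixel sketch of the event-based double-integral idea (illustrative;
# not Prophesee's production algorithm). Model: L(t) = L0 * exp(c * E(t)),
# blurred = mean of L(t) over the exposure, so L0 = blurred / mean(exp(c * E)).
def deblur_pixel(blurred, event_times, polarities, contrast, exposure, steps=1000):
    ts = np.linspace(0.0, exposure, steps)
    # E(t): cumulative event polarity up to each time t
    E = np.array([polarities[event_times <= t].sum() for t in ts])
    return blurred / np.mean(np.exp(contrast * E))

# Example with assumed values: three positive events during a 10 ms exposure.
ev_t = np.array([2e-3, 4e-3, 6e-3])
ev_p = np.array([1, 1, 1])
print(deblur_pixel(blurred=5.0, event_times=ev_t, polarities=ev_p,
                   contrast=0.2, exposure=10e-3))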
Learn more: https://www.prophesee.ai/event-based-vision-mobile/

Monday, March 11, 2024

Preprint on "Skipper-in-CMOS" image sensor

A recent preprint on arXiv (https://arxiv.org/abs/2402.12516) presents a new CMOS image sensor designed to achieve sub-electron read noise and photon-number-resolving capability.

Skipper-in-CMOS: Non-Destructive Readout with Sub-Electron Noise Performance for Pixel Detectors

Abstract: The Skipper-in-CMOS image sensor integrates the non-destructive readout capability of Skipper Charge Coupled Devices (Skipper-CCDs) with the high conversion gain of a pinned photodiode in a CMOS imaging process, while taking advantage of in-pixel signal processing. This allows both single-photon counting and high frame rate readout through highly parallel processing. The first results obtained from a 15 x 15 um^2 pixel cell of a Skipper-in-CMOS sensor fabricated in Tower Semiconductor's commercial 180 nm CMOS Image Sensor process are presented. Measurements confirm the expected reduction of the readout noise with the number of samples, down to a deep sub-electron noise of 0.15 e- rms, demonstrating charge transfer operation from the pinned photodiode and single-photon counting when the sensor is exposed to light. The article also discusses new testing strategies employed for its operation and characterization.
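
The noise reduction the abstract refers to is the standard Skipper averaging argument: N independent, non-destructive samples of the same charge packet average down the read noise by a factor of sqrt(N). A minimal simulation, with an assumed single-sample noise (not a figure from the paper):

import numpy as np

# Averaging N non-destructive reads of one charge packet reduces read noise
# by sqrt(N). sigma_1 is an assumed single-sample noise, for illustration only.
rng = np.random.default_rng(0)
true_charge = 3.0        # electrons in the pixel
sigma_1 = 3.0            # assumed single-sample read noise, e- rms

for n in (1, 100, 400):
    reads = true_charge + rng.normal(0.0, sigma_1, size=(20_000, n))
    estimate = reads.mean(axis=1)   # per-trial average of the n samples
    print(f"N={n:4d}: {estimate.std():.3f} e- rms (theory {sigma_1/np.sqrt(n):.3f})")

With the assumed 3.0 e- rms single sample, 400 samples land at 0.15 e- rms, deep enough below 1 e- to resolve individual photon counts.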

Sunday, March 10, 2024

Job Postings - Week of 10 March 2024

Qualcomm

ADAS camera Engineer

Farnborough, UK

Link

Onsemi

Product Engineer

Meridian, Idaho, USA

Link

University of Warwick

Towards Silicon Photonics Based Gas Sensors

Coventry, UK

Link

Johnson & Johnson

Sr. Manager Visualization Hardware

Santa Clara, CA, USA

Link

NASA

Development of infrared detectors and focal plane arrays for space instruments

Pasadena, CA, USA

Link

Apple

Hardware Sensing Systems Engineer

Cupertino, CA, USA

Link

Sony

Software Engineer/Researcher for Image Sensors

Tokyo, Japan

Link

Meta

Image Sensor Architect

Redmond, Washington, USA

Link

Queen Mary University

Silicon Detector Technician

London, England, UK

Link

Friday, March 08, 2024

Samsung defends AI editing on photos

From TechRadar: https://www.techradar.com/phones/samsung-galaxy-phones/there-is-no-such-thing-as-a-real-picture-samsung-defends-ai-photo-editing-on-galaxy-s24

"There is no such thing as a real picture": Samsung defends AI photo editing on Galaxy S24

Like most technology conferences in recent months, Samsung’s latest Galaxy Unpacked event was dominated by conversations surrounding AI. From two-way call translation to gesture-based search, the Samsung Galaxy S24 launched with several AI-powered tricks up its sleeve – but one particular feature is already raising eyebrows.

Set to debut on the Galaxy S24 and its siblings, Generative Edit will allow users to artificially erase, recompose and remaster parts of an image in a bid to achieve photographic perfection. This isn’t a new concept, and any edits made using this generative AI tech will result in a watermark and metadata changes. But the seamlessness with which the Galaxy S24 enables such edits has understandably left some Unpacked-goers concerned.

Samsung, however, is confident that its new Generative Edit feature is ethical, desirable and even necessary in today’s misinformation-filled world. In a revealing interview with TechRadar, Samsung’s Head of Customer Experience, Patrick Chomet, defended the company’s position on AI and its implications.

“There was a very nice video by Marques Brownlee last year on the moon picture,” Chomet told us. “Everyone was like, ‘Is it fake? Is it not fake?’ There was a debate around what constitutes a real picture. And actually, there is no such thing as a real picture. As soon as you have sensors to capture something, you reproduce [what you’re seeing], and it doesn’t mean anything. There is no real picture. [...] You can try to define a real picture by saying, ‘I took that picture’, but if you used AI to optimize the zoom, the autofocus, the scene – is it real? Or is it all filters? There is no real picture, full stop.”
“But still, questions around authenticity are very important,” Chomet continued, “and we [Samsung] go about this by recognizing two consumer needs; two different customer intentions. Neither of them are new, but generative AI will accelerate one of them.

“One intention is wanting to capture the moment – wanting to take a picture that’s as accurate and complete as possible. To do that, we use a lot of AI filtering, modification and optimization to erase shadows, reflections and so on. But we are true to the user's intention, which was to capture that moment.

“Then there is another intention, which is wanting to make something. When people go on Instagram, they add a bunch of funky black and white stuff – they create a new reality. Their intention isn’t to recreate reality, it’s to make something new. So [Generative Edit] isn’t a totally new idea. Generative AI tools will accelerate that intention exponentially in the next few years [...] so there is a big customer need to distinguish between the real and the new. That’s why our Generative Edit feature adds a watermark and edits the metadata, and we’re working with regulatory bodies to ensure people understand the difference.”

On the subject of AI regulation, Chomet said that Samsung "is very aligned with European regulations on AI," noting that governments are right to express early concerns around the potential implications of widespread AI use.

"The industry needs to be responsible and it needs to be regulated," added Chomet, noting that Samsung is actively working on that. "Our new technology is amazing and powerful – but like anything, it can be used in good and bad ways. So, it’s appropriate to think deeply about the bad ways.”

As for how Generative Edit will end up being used on Samsung's new Galaxy phones, only time will tell. Perhaps the feature will simply help average smartphone users (i.e. those unfamiliar with Photoshop) get the photos they really want, rather than facilitate mass photo fakery. Indeed, it still remains to be seen whether generative AI tech as a whole will be a benefit or a hindrance to society as we know it.


Wednesday, March 06, 2024

GPixel on the verge of IPO?

From: http://www.myzaker.com/article/65d3ce24b15ec01a56438179

(Translated with Google Translate)

...

Against the backdrop of an improving market, Changchun Changguang Chenxin Microelectronics Co., Ltd. (hereinafter "Changguang Chenxin"), a domestic company specializing in CMOS image sensors, has recently advanced its IPO application on the Shanghai Stock Exchange's Science and Technology Innovation Board to the inquiry stage.

In this IPO, Changguang Chenxin plans to raise 1.557 billion yuan, to be invested in the R&D and industrialization of CMOS image sensors for several markets, including machine vision, scientific instruments, and professional imaging. Funds are also planned for the construction of a high-end CMOS image sensor R&D center and to supplement working capital.

However, in recent years Changguang Chenxin has swung from profit to loss during the reporting period. Moreover, for a company seeking a listing on the Science and Technology Innovation Board, its R&D expense ratio has been decreasing year by year, and the detailed breakdown of its R&D expenses has drawn scrutiny from the Shanghai Stock Exchange.

...

Monday, March 04, 2024

Andes and MetaSilicon collaborate on automotive CIS

From Yahoo Finance news:

Andes Technology and MetaSilicon Collaborate to Build the World’s First Automotive-Grade CMOS Image Sensor Product Using RISC-V IP SoC

Hsinchu, Taiwan, Feb. 22, 2024 (GLOBE NEWSWIRE) -- RISC-V IP vendor Andes Technology and edge computing chip provider MetaSilicon jointly announced that the MetaSilicon MAT Series is the world's first automotive-grade CMOS image sensor series built on a RISC-V IP SoC, using Andes' AndesCore™ N25F-SE processor. The sensors are designed in accordance with the ISO 26262 functional safety standard to achieve ASIL-B and follow AEC-Q100 Grade 2 for a high level of safety and reliability. By using technologies such as HDR, they achieve advanced imaging in a simple, economical, and efficient system, delivering high dynamic range, high sensitivity, and high color reproduction while meeting the application requirements of ADAS decision-making.

The N25F-SE from Andes Technology is a 32-bit RISC-V CPU core supporting the standard IMACFD instruction sets, which include an efficient integer instruction set and single/double-precision floating-point instructions. The N25F-SE's efficient five-stage pipeline strikes a good balance between high operating frequency and a streamlined design. It also offers rich configuration options and flexible interfaces, which greatly simplify SoC development. In addition, the N25F-SE has obtained full ISO 26262 ASIL-B compliance certification, which enables the image sensor chip to meet vehicle-grade safety requirements. For the development of MetaSilicon's automotive-grade chips, the N25F-SE and its safety package provide a well-suited CPU solution and, together with Andes' technical support, shorten the chip development time significantly.

MetaSilicon has first-class R&D capabilities and has developed several cutting-edge technologies, including LOFIC (Lateral Overflow Integration Capacitor) + DCG (Dual Conversion Gain) HDR (High Dynamic Range), which meet the high-quality image requirements of smart car vision applications. The MAT Series 1MP CMOS image sensor chip combines low power consumption with high dynamic range; its effective resolution is 1280 (H) x 960 (V), and it supports HDR image output at up to 60 fps with 120 dB dynamic range. The MAT Series 3MP CIS adds capabilities such as low power consumption, ultra-high dynamic range, an on-chip ISP, and LFM; its effective resolution is 1920 (H) x 1536 (V), it supports frame rates up to 60 fps, and its dynamic range reaches an industry-leading 140 dB+. These chips can provide reliable, high-quality image information for intelligent automotive applications.
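
For context on those dB figures, image sensor dynamic range is conventionally DR = 20 * log10(max signal / noise floor). A minimal sketch, with hypothetical charge values chosen only to show the scale the figures imply (not MetaSilicon specifications):

import math

# Dynamic range in dB from effective full-well capacity and noise floor.
# The electron counts below are hypothetical, for scale only.
def dynamic_range_db(full_well_e, noise_floor_e):
    return 20.0 * math.log10(full_well_e / noise_floor_e)

print(f"{dynamic_range_db(1_000_000, 1.0):.0f} dB")    # 120 dB -> 10^6 signal ratio
print(f"{dynamic_range_db(10_000_000, 1.0):.0f} dB")   # 140 dB -> 10^7 signal ratio

In other words, 140 dB means the brightest resolvable signal is about ten million times the noise floor, which is why single-exposure techniques like LOFIC plus dual conversion gain are needed rather than a conventional linear readout.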

"The N25F-SE provides a safety package, which includes a safety manual, safety analysis report and a development interface outline. The N25F-SE and its safety package are effective, high-performance and flexible automotive solutions. They can significantly reduce the time required to design automotive grade SoCs and to comply with the ISO 26262 standard", said Dr. Charlie Su, President and CTO of Andes Technology. "We are very pleased that N25F-SE's IP and safety package efficiently support MetaSilicon shorten the development time for its two automotive-grade chips. We also look forward to more cooperation between the two companies in the future to create more innovative products."

Jianhua Zheng, CTO of MetaSilicon, said, “Among the various sensors used in automotive ADAS applications, visual image processing is particularly important. If the image is not sufficiently accurate and timely, it will directly lead to errors in the judgment of the back-end algorithm, so the HDR performance requirements are extremely high. MetaSilicon's LOFIC+DCG HDR technology can achieve an ultra-high dynamic range of 140 dB+ to meet practical application needs in the automotive ADAS field. We are honored to work closely with Andes Technology on two high-performance chips, using the N25F-SE, the world's first RISC-V core certified to the ISO 26262 functional safety standard. As a result, we can shorten product development time and achieve our functional safety goals.”