Monday, January 26, 2015

Temporal Pixel Multiplexed Imaging

Gil Bub, University Research Lecturer at Oxford, UK, presents fast, high-resolution imaging approaches, one of them being Temporal Pixel Multiplexing (TPM):
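
In TPM, pixels within a small tile are exposed at staggered times during a single camera frame, so one high-resolution capture can also be decoded into a short high-frame-rate, low-resolution sequence. Below is a minimal sketch of that re-binning step, assuming a 3x3 staggering pattern and plain NumPy arrays (the function name and pattern are illustrative, not from the talk):

```python
import numpy as np

def tpm_rebin(frame, tile=3):
    """Re-bin one TPM-captured frame into tile*tile low-resolution sub-frames.

    Assumes pixel (i, j) inside each tile x tile block was exposed at time
    offset i * tile + j (an illustrative staggering pattern).
    """
    h, w = frame.shape
    h -= h % tile  # crop so the frame divides evenly into tiles
    w -= w % tile
    frame = frame[:h, :w]
    # Sub-frame k gathers every pixel that sits at in-tile position k.
    subframes = [frame[i::tile, j::tile] for i in range(tile) for j in range(tile)]
    return np.stack(subframes)  # shape: (tile*tile, h // tile, w // tile)

# One 480x640 high-resolution frame -> nine 160x213 frames at 9x the frame rate
high_res = np.random.rand(480, 640)
fast_low_res = tpm_rebin(high_res)
print(fast_low_res.shape)  # (9, 160, 213)
```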

Sunday, January 25, 2015

Rumor: Microsoft New ToF Camera

I was told that the strange medallion that Alex Kipman wore on his jacket during the HoloLens presentation is, in fact, the newest version of Microsoft's ToF camera, designed in Israel:


The same medallion appears in the Dallas Morning News and The Verge:

Friday, January 23, 2015

Omnivision Proposes Feedback to Reduce FD Voltage Swing

Omnivision patent application US20150015757 "Image sensor pixel cell readout architecture" by Trygve Willassen says "The sum of the floating diffusion voltage swing and pinning voltage typically limits the supply voltage for the active pixel sensor to a minimum of 2.5-3 Volts. However, there is a continuing demand for active pixel sensors with a supply voltage of less than 2.5-3 Volts as the demands for further miniaturization of active pixel sensors increase." So, the proposal is to reduce the FD swing by capacitive feedback through Cfb 116:
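
The headroom arithmetic behind the quoted limit is simple: the pixel supply must cover the pinning voltage plus the full FD swing, so attenuating the swing at the FD node relaxes the supply requirement. A back-of-the-envelope sketch with illustrative numbers (the pinning voltage, native swing, and attenuation factor are assumptions, not values from the application):

```python
# Minimum pixel supply ~ pinning voltage + FD voltage swing (per the quoted text).
v_pin = 1.5        # assumed pinning voltage, V
v_fd_swing = 1.0   # assumed native FD swing needed for the full well, V

print("conventional supply >= %.1f V" % (v_pin + v_fd_swing))  # ~2.5 V

# With capacitive feedback through Cfb, the same signal charge produces a
# smaller voltage excursion at the FD node, shrinking the required headroom.
attenuation = 0.3  # assumed residual fraction of the FD swing after feedback
print("with feedback supply >= %.1f V" % (v_pin + attenuation * v_fd_swing))  # ~1.8 V
```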

Thursday, January 22, 2015

Reports: Microsoft AR HoloLens Controlled by 4 Cameras

Microsoft presents an augmented reality HoloLens headset mixing the real world with computer data, featured in a YouTube video:



Another YouTube video shows a Microsoft HoloLens demo driven by gesture control:



Engadget says "There are at least four cameras or sensors on the front of the HoloLens prototype." Mashable writes "Based on a quick look at the headset, it appears to have four front-facing cameras that could be used to detect the positions of the user's hands as she interacts with holographic objects."

Another Mashable article says "The augmented-reality headgear is full of sensors, but the most powerful one may be the 3D depth sensor. It’s the same one you’ll find in the Kinect and it is capable of building a detailed 3D mesh map of a room and everything in it. Once HoloLens knows what’s in the room, it can essentially drape 3D imagery over it so that it looks as if the digital objects and textures are part of the same environment as real world walls and furniture."

Microsoft HoloLens

Another Microsoft video briefly shows HoloLens internal design:

HoloLens internals

Wired reports that "the headset is still a prototype being developed under the codename Project Baraboo, or sometimes just “B.” HoloLens chief inventor Alex Kipman has "been working on this pair of holographic goggles for five years. No, even longer. Seven years, if you go back to the idea he first pitched to Microsoft, which became Kinect."

The Seattle Times reports: "The company isn’t saying how much a HoloLens will cost or when it will be broadly available but it’s likely to cost less than a high-end computer. Chief Executive Satya Nadella said it’s intended to be accessible to consumers as well as business users, though the latter seems to be a primary target."

Wednesday, January 21, 2015

Leap Motion CTO on Future Human-Machine Interfaces

David Holz, Leap Motion CTO, presents his vision of future human-machine interaction in this YouTube video:

Image Sensors London Conference

The Image Sensors conference, to be held on March 17-19, 2015 in London, UK, has published its final agenda:

Workshop – If you can’t make it global, then let the shutter roll!
Dr Albert Theuwissen, Founder, Harvest Imaging

This workshop will deal with the needs, advantages, disadvantages, and characteristics of the various shutter types used with CMOS image sensors. Most devices on the market have a rolling shutter, but there is a need for a global shutter, as was the case for CCDs.

Fast-moving objects in the scene deform when read out in rolling shutter mode. But switching from a rolling shutter to a global shutter is not straightforward, certainly not if the characteristics of today's rolling shutter devices need to be maintained (correlated double sampling, anti-blooming, electronic shuttering, low dark current). Two global shutter types have been introduced in commercial products: storage of the signals in the charge domain or in the voltage domain.

The first part of the workshop will concentrate on the basic properties of the two shutter types, as well as on the effects the rolling and global shutters have on imager characteristics. In the second part of the workshop, two existing solutions, a global shutter in the charge domain and a global shutter in the voltage domain, will be analyzed and compared with each other. Finally, a look into the future will close the workshop.
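
To visualize the rolling-shutter deformation mentioned above, here is a minimal simulation sketch (illustrative, not workshop material): a vertical bar moving horizontally is sampled row by row, so each row sees the bar at a different position and the readout renders it slanted, while a global shutter samples all rows at the same instant:

```python
import numpy as np

H, W = 100, 200      # image size (rows, columns)
bar_width = 10
speed = 0.5          # horizontal motion, pixels per row-readout time

def bar_x(t):
    """Horizontal position of the moving bar at time t (in row-readout units)."""
    return 20 + speed * t

# Global shutter: every row is sampled at the same instant, t = 0.
global_img = np.zeros((H, W))
x0 = int(bar_x(0))
global_img[:, x0:x0 + bar_width] = 1.0

# Rolling shutter: row r is sampled at time t = r, so the bar shifts row by row.
rolling_img = np.zeros((H, W))
for r in range(H):
    x = int(bar_x(r))
    rolling_img[r, x:x + bar_width] = 1.0

# The vertical bar comes out slanted by about speed * H ~ 50 pixels.
print("rolling-shutter skew:", int(bar_x(H - 1)) - int(bar_x(0)), "pixels")
```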


Technology push, or market pull – perspectives on maximising opportunities from technology development
Giora Yahav, General Manager - Advanced Imaging Technology Organisation, Microsoft
  • So you have a great idea, what now?
  • Finding a niche, or letting the niche find you
  • Where next in 3D imaging, technology push or market pull in 2015 and beyond
Patenting an idea
Daniel Doswald, Examiner, EPO
  • Familiarize yourself with IP - How to search for patents
  • Is my idea patentable?
  • Disclosure and scope of protection of a patent
  • Inside the mind of a patent examiner
  • Challenging a competitor’s patent in an early stage
Protecting your IP in today’s fast moving image sensor marketplace
Keith Beresford, European Patent Attorney, Beresford & Co
  • Quick guide to patents and why they are important for your business
  • Best practice for integrating IP protection into your R&D workflow and business strategy
  • Experiences from the front lines – what happens if you draft your specification poorly
  • Combatting patent trolls – what you can do to prevent and deter this activity
Beyond Bayer rolling shutter CMOS image sensors
Eiichi Funatsu, Senior Director, OmniVision
  • CMOS image sensor market and development trend
  • Super high sensitivity by RGBC solution
  • RGB-IR for man-machine interface
  • Global shutter for machine vision
Stacked image sensors - the new image sensor standard
Paul Enquist, CTO, Ziptronix
  • State of play with chip stacking
  • Comparison of stacking using TSVs vs. hybrid bonding
  • Technical benefits of the stacking approach
The role of ADCs in imaging applications - novel approaches
Prof Ángel Rodríguez-Vázquez, R&D Director, AnaFocus
  • CIS technology trends and the need for ADCs in emerging applications
  • ADC options and how to implement
  • Limitations and areas for further study
Optical filter glass for image sensors
Prof Steffen Reichel, Development / Application, Advanced Optics, Schott
  • Challenges for IR cut filters with BSI chips
  • Evaluation of IR filter glass materials and lens design
  • Recommendation for materials specifications and performance parameters for plano-plano IR filters
Image sensor device testing needs with high yield performances
Satoshi Takahashi, Senior Engineer, Advantest
  • Growing manufacturing and demand requirements drive the need for test systems
  • The key factors to realize high yield device testing for CIS device manufacturing
  • Architecture and performance of a test system capable of measuring 64 devices simultaneously at up to 2.5Gbps
Security imaging today and tomorrow
Dr Anders Johannesson, Senior Expert Engineer, Axis Communications
  • Historical development of security cameras from grainy images to self-contained systems with 4K resolution and beyond
  • Defining the imaging challenges in security applications
  • Key aspects for further improvement in security cameras
Advances in cooled/uncooled IR sensors for security applications
Claire Valentin, VP Marketing, Sofradir

Image sensors for low light levels with active imaging features
Pierre Fereyre, Image Sensor Design, and Gareth Powell, Strategic Marketing Manager, e2v
  • Five transistor pixel CMOS sensor for range-gated active imaging to extend usability of intelligent cameras in the most difficult conditions
  • Advanced state of the art image sensors and embedded features, with emphasis on size, weight, power and cost benefits
  • New applications that are enabled
Image sensors for a planetary space mission
Dr Harald Michaelis, Head of Department, DLR Institute of Planetary Research
  • Image capture aims of the Rosetta mission
  • Camera and sensor specifications, performance parameters, design considerations
  • Results from the mission
  • Future planetary imaging plans and aspirations and next generation camera design
Future of computational imaging
Raji Kannan, Founder, LensBricks
  • Hardware developments
  • Advances in processing and impact at a system level
  • Market opportunities
Image Fusion - how to make best use of broad spectrum data
David Connah, Research Associate in Visual Computing, University of Bradford / CoFounder, Spectral Edge
  • Challenges in fusing multiple data channels into one single image for display
  • Mapping the contrast (structure tensor) of a multi-channel image exactly to a 3-channel gradient field
  • The problem of mapping N-D inputs to 3-D (RGB) outputs
  • Applications in hyperspectral remote sensing, fusion of colour and near-infrared images and colour visualisation of MRI Diffusion-Tensor images
Lensless ultra-miniature computational sensors and imagers: using computing to do the work of optics
Dr David Stork, Fellow and Research Director of the Computational Sensing and Imaging Group, Rambus Labs
  • Computational optical sensors and imagers that do not rely on traditional refractive or reflective focusing
  • Computing images from raw photodiode signals
  • Imager performance, features, and applications
High performance smart automotive camera system
Tarek Lulé, Automotive Camera System Consultant, ST Microelectronics
  • Automotive camera needs: power, size, ambient requirements
  • High sensitivity, HDR pixel architecture: choice of pixel and resulting performance
  • Imager system architecture: 1.3Mpix raw image sensor with 130dB dynamic range and excellent low-light performance
  • Automotive HDR image processor: very low power companion chip for HDR colorization, with embedded video analytics and automotive interfaces
Depth sensing solutions for consumer electronics
Markus Rossi, Chief Innovation Officer, Heptagon Advanced MicroOptics
  • Drivers for optical depth sensing in consumer applications and review of current technologies
  • Hardware solutions for depth sensing concepts
  • Case examples
Cameras in medical applications - novel applications and innovative camera designs
Thomas Ruf, Manager Sales and New Business, First Sensor E²MS
  • Medical cameras come in all shapes and sizes – what challenge does this present?
  • Examples of innovative camera design for X-ray apparatus, computer tomographs and stereo endoscopes
  • Enabling technologies for design and assembly of complex and unique camera systems
High end camera systems for multi imager applications
Marcus Verhoeven, Managing Director, aSpect Systems
  • Implementing hardware, software, mechanics, temperature control, optics and imaging technology for high performance imaging applications
  • Example 1: Per-pixel energy dispersive X-ray architecture with a resolution of 400x400 pixels (250µm), a frame rate of 10,000 fps, and support for an energy resolution of 1keV in a range of 3-220keV
  • Example 2: A new detector for proton therapy of cancer based on a stack of 12 crossed strip detectors and 24 layers of CMOS imagers
An insider's outside view of image sensor development past, present and future
Jed Hurwitz, Technologist, Advanced Measurement Systems, Analog Devices
  • The beginning of CMOS
  • The glory years!
  • Progress viewed from the outside
  • Perspectives on the future of digital imaging

Tuesday, January 20, 2015

Aptina Proposes In-Pixel Ramp for ADC

Aptina's patent application US20150009379 "Imagers with improved analog-to-digital circuitry" by Hai Yan and Kwang-bo Cho adds a capacitor on the pixel's FD node to deliver the ramp for the ADC:

"Conventional ramp circuitry applies the ramp voltage to a capacitor at the input of the comparator of the sample-and-hold circuitry. However, such an arrangement requires a high pixel supply voltage in order to support a wide range of pixel output signals sampled onto the capacitor (e.g., sufficient to support the well capacity of the pixel). The capacitor is required to have a capacitance sufficient to satisfy noise requirements such as a maximum amount of thermal (k*T/C) noise, which in turn requires the ramp circuitry to have high driving capability for driving the large capacitor. Conventional capacitors used in ramp circuitry can be hundreds of femtofarads (fF). The large sample-and-hold capacitor also occupies valuable circuit area of the imager. In addition, the pixel array is typically read by scanning pixel rows in sequential order. This sequential scanning can lead to row-dependent noise in the image output signals of the pixel array. For example, transient noise in a power supply signal is consistent throughout the pixels of a row but varies between rows. It would therefore be desirable to provide imagers with improved pixel readout and analog-to-digital conversion capabilities."

Aptina Proposes Discontinuous Exposure Mode

Aptina's patent application US20150009375 "Imaging systems with dynamic shutter operation" by Gennadiy Agranov, Sergey Velichko, and John Ladd describes the artifact problem:

"In conventional imaging systems, image artifacts may be caused by moving objects, moving or shaking camera, flickering lighting, and objects with changing illumination in an image frame. Such artifacts may include, for example, missing parts of an object, edge color artifacts, and object distortion. Examples of objects with changing illumination include light-emitting diode (LED) traffic signs (which can flicker several hundred times per second) and LED stop lights of modern cars.

While electronic rolling shutter and global shutter modes produce images with different artifacts, the root cause for such artifacts is common for both modes of operation. Typically, image sensors acquire light asynchronously relative to the scenery being captured. This means that portions of an image frame may not be exposed for part of the frame duration. This is especially true for bright scenery when integration times are much shorter than the frame time used. Zones in an image frame that are not fully exposed to dynamic scenery may result in object distortion, ghosting effects, and color artifacts when the scenery includes moving or fast-changing objects. Similar effects may be observed when the camera is moving or shaking during image capture operations."

The proposed solution is:

"Each image pixel in a pixel array may include a shutter element for controlling when the photosensitive element acquires charge. For example, when a pixel's shutter element is “open,” photocurrent may accumulate on the photosensitive element. When a pixel's shutter element is “closed,” the photocurrent may be drained out from the pixel and discarded.

The shutter elements may be operated dynamically by being opened and closed multiple times throughout the duration of an imaging frame. Each cycle of dynamic shutter operation may include a period of time when the shutter is open and a period of time when the shutter is closed. At the end of each cycle, the charge that has been acquired on the photosensitive element during the cycle may be transferred from the photosensitive element to a pixel memory element. By repeating this sequence multiple times, the charge accumulated on the pixel memory element may represent the entire scenery being captured without significantly unexposed “blind” time spots."

Toshiba Proposes Superlattice Underneath Transfer Gate

Toshiba patent application US20150008482 "Semiconductor device and manufacturing method thereof" by Motoyuki Sato says that forming a SiGe superlattice under the transfer gate can drastically reduce the influence of SiO2/Si interface traps that could potentially capture photoelectrons during the transfer. The dark current and white pixel defects are also said to be reduced:

Monday, January 19, 2015

Counter-Drones Need 12 Cameras for Interception

Popular Science: With all the privacy concerns associated with drone usage, Rapere.io offers a counter-drone. "Simply take it outside, put it on the ground, and press the GO button. The Rapere will take off, while at the same time scanning the sky for drones. It can tell the difference between a bird and a drone, and will fly over top of any drone within range, then disable it." Rapere strikes by lowering a tangle line onto the rotors of its target:


Twelve 90fps VGA cameras pointing in every direction are used to guide the drone to its target, hovering above the free-floating target drone. Rapere says that detecting a free-floating object that is well illuminated and far from any other visible object is easy. Because the intercept flight time is short, the counter-drone can burn lots of watts on the onboard computer, which also simplifies the work.
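
For a rough sense of the vision workload (illustrative arithmetic, not Rapere's figures): twelve VGA cameras at 90fps deliver on the order of 300 Mpixels per second, which is why being able to burn power on the onboard computer for a short intercept matters:

```python
cameras = 12
width, height = 640, 480   # VGA
fps = 90
bits_per_pixel = 8         # assumed monochrome 8-bit output

pixels_per_second = cameras * width * height * fps
print("%.0f Mpixel/s" % (pixels_per_second / 1e6))                     # ~332 Mpixel/s
print("%.2f Gbit/s raw" % (pixels_per_second * bits_per_pixel / 1e9))  # ~2.65 Gbit/s
```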

With drones and anti-drones both using many cameras, it appears that the image sensor market is set for new explosive growth.