From WebOS Internals
Revision as of 21:27, 7 May 2010 by Heinervdm (talk | contribs) (→‎Screen format)


Cypress CY8MRLN chipset

The Palm Pre uses the CY8MRLN chipset from Cypress, connected to the core over SPI. The CY8MRLN provides some quite interesting features that are already supported by the driver in the kernel source:

  • Wake-on-touch (configurable from userspace through ioctls; see include/linux/spi/cy8mrln.h)
  • firmware loading (the CY8MRLN provides a [[1]])
  • touchscreen data is forwarded from the kernel to a usermode interface on /dev/touchscreen

Dump data from touchscreen

There are some userland tools from Palm in /usr/bin which help you to dump the data arriving on /dev/touchscreen.

ts-lib implementation

To use the touchscreen with a different OS like SHR or FSO, we have to implement a plugin for ts-lib which decodes the data arriving from the touchscreen through /dev/touchscreen. Palm instead uses their own implementation, /usr/bin/hidd, which also provides plugins for other input systems like the keyboard and the system buttons.

The current problem with the ts-lib plugin is that the structure of the data arriving on /dev/touchscreen is not known. There is another driver for a different touchscreen chipset for Android available on the Cypress homepage, which includes decoding of the data from that touchscreen chipset. I assume that the data from the chipset used in the Palm Pre is nearly the same.

The format is totally different from the one on that page. --Heinervdm 20:14, 7 May 2010 (UTC)

First development is here:

Screen format

The screen is divided into 70 (7x10) fields, plus 7 fields for the gesture area. The positions of the measurement points can be seen if you look at the touchscreen at an angle of ~45°; they appear as small silver points.

Each of those fields reports its capacitance.

The potential of the touchscreen is different for every device. The zero values are constant but are not the same for every field (they increase from left to right). To get usable values, one has to subtract the current value of a field from its zero value.

Changes can be seen even at a distance of 5 mm above the screen.

To get accurate positions from these 70 measurement points, one has to interpolate. Because a finger is bigger than a single field, at least 4 fields (in the middle of the screen) measure something. You have to correct the position of the peak measurement point by a certain amount according to the values of the neighbouring fields.
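A common way to do this kind of interpolation is a weighted centroid over the peak field and its neighbours. This is a sketch under the assumption of a 7x10 grid of baseline-corrected values, not Palm's actual algorithm.

```c
#include <stdint.h>

#define COLS 7
#define ROWS 10

/* Estimate the touch position (in field coordinates) as the
 * weighted centroid of the 3x3 neighbourhood around the field
 * with the strongest signal. */
void centroid(const uint16_t f[ROWS][COLS], float *x, float *y)
{
    /* find the peak field */
    int px = 0, py = 0;
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            if (f[r][c] > f[py][px]) { py = r; px = c; }

    /* weighted average over the peak and its neighbours */
    float sum = 0, sx = 0, sy = 0;
    for (int r = py - 1; r <= py + 1; r++)
        for (int c = px - 1; c <= px + 1; c++) {
            if (r < 0 || r >= ROWS || c < 0 || c >= COLS)
                continue;
            sum += f[r][c];
            sx  += f[r][c] * c;
            sy  += f[r][c] * r;
        }
    *x = sum > 0 ? sx / sum : px;
    *y = sum > 0 ? sy / sum : py;
}
```

With one hot field and zero neighbours the centroid degenerates to that field's coordinates; a finger covering several fields pulls the result fractionally toward the stronger neighbours.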


The dataset is 167 bytes and is structured like the following struct:

<source lang="c">
struct ts_frame {
    uint16_t frame_start; // constant:
    uint16_t field[77];   // intensity of the 70 screen fields + 7 gesture fields
    uint16_t frame_end;   // frame end indicator; always 0xffff
    uint8_t  seq_nr1;     // incremented when seq_nr0 reaches the scanrate
    uint16_t seq_nr2;     // incremented when seq_nr1 reaches 255
    uint8_t  unknown[4];  // timestamp?
    uint8_t  seq_nr0;     // incremented from 0 to scanrate
    uint8_t  null_value;  // always \0
} __attribute__((packed)); // packed, so sizeof() is exactly 167 bytes
</source>
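Given that layout, a raw frame can be sanity-checked against the 0xffff end marker and the trailing null byte before the field data is trusted. This sketch assumes a little-endian host (as on the Pre's ARM core), so the struct can simply be copied over the buffer; the function name is illustrative.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct ts_frame {
    uint16_t frame_start;
    uint16_t field[77];
    uint16_t frame_end;   /* always 0xffff */
    uint8_t  seq_nr1;
    uint16_t seq_nr2;
    uint8_t  unknown[4];
    uint8_t  seq_nr0;
    uint8_t  null_value;  /* always \0 */
} __attribute__((packed)); /* packed: sizeof() == 167 */

/* Copy a raw buffer into a ts_frame and check the frame markers.
 * Returns 1 if buf looks like a valid frame, 0 otherwise.
 * memcpy only works here because host and wire byte order match
 * (little-endian assumption). */
int ts_frame_parse(const uint8_t *buf, size_t len,
                   struct ts_frame *out)
{
    if (len < sizeof(struct ts_frame))
        return 0;
    memcpy(out, buf, sizeof(*out));
    return out->frame_end == 0xffff && out->null_value == 0;
}
```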