Solution Overview

          <History of the S Pen's roadmap and functions>

Because the stylus pens used in earlier smartphones were limited to pointing and writing, their usability was low. Therefore, after launching the Galaxy Note series, we began extending the S Pen's functionality with the Note 2 in order to take it a step further.

The main functions newly introduced in the Note 2 were Air View, which lets the user hover the S Pen over the screen to preview various kinds of content and detailed information, and Easy Clip, which captures a selected area of the screen as an image that can be edited, saved, and shared.
Pen Selection and similar features were also added, enabling S Pen gestures to easily select text and items and invoke specific commands.

Starting with the Note 3, we maximized the S Pen's usability through Air Command, which brings together five functions that can be performed with the S Pen.
Whereas Easy Clip previously only saved an image, Smart Select can extract the text in the image and save it to a scrapbook.
In addition, Action Memo recognizes handwritten phone numbers, email addresses, URLs, and so on, and links them to the corresponding actions, while Pen Window runs an application in a window whose size the user designates.

In the Note 4, Easy Clip was integrated into Smart Select. When the user selects an area, the feature offers a choice: save the selection as an image, or analyze its content and present the result as a card view so that follow-up actions such as calling, messaging, and web browsing continue smoothly. Up to ten selected images can be saved temporarily.

Even though we offered users a variety of features, we found that the interaction depth was too long and complicated, which kept the frequency of use low. Accordingly, we removed the complicated features and focused on image editing and text analysis, which were used most often. Users wanted features that run quickly, so we added Screen Off Memo, which makes it possible to jot down a memo instantly, as well as Scroll Capture, which captures a screen from top to bottom.

A new direction for the pen was then presented. We saw potential in the hover state, in which the pen floats above the screen before touching it: in this state the pen can deliver important information to the user quickly and conveniently. A Translate function that can translate any word on any screen was added, along with a Magnifier function that helps users with poor eyesight see the screen clearly. Smart Select was also brought to a more complete form with a GIF capture function that captures motion rather than a still screen.

<Flow of the Translate application>

The figure above is the flow chart of Translate, which is specialized for the hover feature. When the user performs a hover event with the S Pen over text after setting the source and target languages, the translation function is triggered. The Translate application is implemented as a Service. It first sends a request to the analysis controller, which searches the Android view tree to check whether text information exists at the corresponding coordinates.

If text information exists there, it can be considered accurate, and the resulting text is translated using an open API.

However, if the text information cannot be confirmed from the view tree, the image information is sent to an optical character recognition (OCR) engine and the recognized result is obtained from it. The recognized text then goes through the same translation procedure. Because translation is provided only for the area under the hover point, which limits usability, we are planning an additional interface as well as support for languages we cannot currently handle.
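The two-stage lookup described above (view tree first, OCR fallback, then a shared translation step) can be sketched as follows. This is a minimal, self-contained illustration, not the actual implementation: `ViewTreeLookup`, `OcrRecognizer`, and the `translateApi` function are hypothetical stand-ins for the analysis controller, the OCR engine, and the open translation API.

```java
import java.util.Optional;
import java.util.function.Function;

/** Sketch of the hover-translate flow: try the view tree first, fall back to OCR. */
public class HoverTranslate {

    /** Stand-in for the analysis controller's view-tree search at coordinates (x, y). */
    interface ViewTreeLookup {
        Optional<String> textAt(int x, int y);
    }

    /** Stand-in for the OCR engine applied to the captured hover-area image. */
    interface OcrRecognizer {
        Optional<String> recognize(byte[] imageBytes);
    }

    private final ViewTreeLookup viewTree;
    private final OcrRecognizer ocr;
    private final Function<String, String> translateApi; // stand-in for the open translation API

    HoverTranslate(ViewTreeLookup viewTree, OcrRecognizer ocr,
                   Function<String, String> translateApi) {
        this.viewTree = viewTree;
        this.ocr = ocr;
        this.translateApi = translateApi;
    }

    /** Handle a hover event: resolve the text under the pen, then translate it. */
    Optional<String> onHover(int x, int y, byte[] hoverAreaImage) {
        // 1. Prefer text from the view tree: if present, it is considered accurate.
        Optional<String> text = viewTree.textAt(x, y);
        // 2. Otherwise fall back to OCR on the captured image of the hover area.
        if (text.isEmpty()) {
            text = ocr.recognize(hoverAreaImage);
        }
        // 3. Either way, the recognized text goes through the same translation step.
        return text.map(translateApi);
    }
}
```

A caller would wire in the real view-tree search, OCR service, and translation client; keeping the translation step behind a single function makes the two recognition paths converge exactly as in the flow chart.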