
Signatures identify us to the point of being perhaps the most genuine expression of our invisible or unconscious identity. Technology has advanced so far that a small machine like the MAX78000 can help us find, among hundreds or thousands of small graphic features, surely unique and almost insignificant, the author behind a signature... or, no less important, determine that it does not belong to him.

Signatures are still the traditional way of validating a document in relation to the legitimacy of the author or authors who sign it. We find them in all the bureaucratic areas of the institutions that surround us: financial, judicial, educational, administrative, etc.

Graphological analysis relies on graphic categories to determine the authenticity of a signature. Among others, the letter forms, the initial and final strokes, the speed, the pressure, and the continuity stand out. The features of writing that are particular to each person are called "typical scriptural gestures" or "typical gestures". They may go unnoticed by most people, but for an expert they are of great value in the process of identifying the signatory.

There are countless such gestures, but the way each person writes and signs is their own, to the point that no two writings or signatures are alike.

The graphological process for evaluating the authenticity of a signature uses millimeter grids, magnifying glasses of various diopters, digitizing tablets, software, etc., together with great experience and professionalism on the part of the expert.

In Elektor's webinar "MAX78000 Neural Network Accelerator" I had the opportunity to observe how a machine equipped with a microphone and a camera can identify words and faces after the training of convolutional neural networks, and with exceptionally low energy consumption during the "inference" process.

Given this fascinating experience and the possibility of participating in the "MAX78000 AI Design Contest (powered by Elektor)", I suggested to my wife, an educator like me and also a professional graphologist, that we look for problems whose solution could be related to this technology. This is how this project on identifying the "authenticity of a signature" with a "trained" machine (Deep Learning) arose.

Getting started:

Following the steps described by Mathias Claussen in his post "Making Coffee with the MAX78000 and Some AI (Part 1): Getting Started", I installed Ubuntu 20.10 in a dual-boot with Windows 10 on my notebook (12 GB RAM, Intel i7 processor, NVIDIA GeForce 940MX graphics).

In Linux I installed the driver packages for NVIDIA graphics cards (they enable CUDA processing acceleration) and the packages for pyenv (which creates isolated environments, so you can work with one Python version without dependency conflicts with other versions installed on the system). Then, with pyenv, we install Python 3.8.9 and configure the Git environment. Next, we create a folder /home/user/Documents/Source/AI and inside it clone the "training" and "synthesis" repositories from GitHub. In summary, these are the elementary steps needed to train convolutional neural networks on this operating system.
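As a quick sanity check that the environment is ready, a short Python snippet like the following can be run inside the pyenv environment (assuming PyTorch, which the "training" repository's requirements install, is already present):

```python
import sys
import torch

# Confirm the pyenv interpreter version and that CUDA can see
# the notebook's GPU before starting a long training run.
print("Python:", sys.version.split()[0])
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```

If CUDA is not reported as available, training will silently fall back to the (much slower) CPU.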

My first training run was done with train_kws20_v3, as proposed by that author. It lasted 8:15 hours, with the NVIDIA graphics card reaching a temperature of 78 degrees (see attached file: 2021.06.05-113902.rar).
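For long runs like this one, the GPU temperature can be watched from a second terminal. Here is a minimal sketch using the pynvml bindings (an assumption on my part: they come from the nvidia-ml-py package and are not part of the MAX78000 tooling):

```python
import time
import pynvml  # from the nvidia-ml-py package (not part of the MAX78000 tools)

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first (and only) GPU

try:
    # Report the GPU core temperature once a minute while training runs.
    while True:
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"GPU temperature: {temp} °C")
        time.sleep(60)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```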

I also installed the Eclipse IDE on Windows 10 to import, compile and debug the examples provided by the MaximSDK.

The first tests with the MAX78000FTHR board were carried out with the examples located in the C:\MaximSDK\Examples\MAX78000\CNN folder. I tried the spoken-word identifier kws20_demo, importing and debugging it with the Eclipse IDE (Windows 10), and also tried building the examples with MinGW on Windows 10; it worked correctly both ways (see video: VIDEO - EXAMPLE KWS20 COMPRIMIDO.mp4).

FIGURE 1.

The connection of the TFT screen to the MAX78000FTHR board is as follows:

FIGURE 2.

Other tests that I compiled and debugged from the MinGW console were the examples from the cats-dogs_demo folder (FIGURE 3 and VIDEO - EXAMPLE CATS AND DOG.mp4) and from faceid_evkit (VIDEO - FACEID.mp4).

FIGURE 3.

One of the problems I encountered was that the MAX78000 locked up and would not let me reprogram it while kws20_demo was loaded. To solve this, I would create an empty text file called erase.act and then "drag and drop" it onto the DAPLINK drive; in this way the program flash memory was erased. I am currently experimenting with faceid.
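The same "drag and drop" can also be scripted. A minimal sketch, assuming the DAPLINK drive mounts as D: on Windows (a hypothetical path; adjust it for your system):

```python
from pathlib import Path

# Hypothetical mount point of the DAPLINK mass-storage drive.
daplink = Path("D:/")

# Writing an empty file named erase.act to the drive has the same effect
# as dragging and dropping it: the board erases its program flash.
(daplink / "erase.act").touch()
```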

Investigating the possibility of evaluating signatures with a neural network trained to discern the "typical gestures" of a signature, I started a brief investigation into the recognition of faces and handwritten numbers.

Without going into very theoretical speculation, I understand that recognition starts by transforming graphic information captured by a digital camera into numerical data stored in matrices that represent vectors. This transformation is selective, since it acts as a filter (embedding) of salient aspects of the face that are common to all human beings, although they differ slightly from person to person. Thus arises the "numerical version" of thousands of faces that are stored, digitized, in the memories of computers and later used to generate the models that serve as a reference for training deep neural networks. Discernment comes from being able to see the difference between vectors (sets of numbers), and precision has to do with the method applied to establish that difference (for example, the Euclidean distance).
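To make that last step concrete, here is a small numerical sketch: two toy embedding vectors (invented values, much shorter than real face embeddings) are compared with the Euclidean distance, and a made-up threshold decides whether they belong to the same person:

```python
import numpy as np

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Smaller distance between two embeddings means greater similarity."""
    return float(np.linalg.norm(a - b))

# Toy 4-value "embeddings"; real ones hold dozens or hundreds of values.
reference = np.array([0.12, -0.54, 0.33, 0.91])
candidate = np.array([0.10, -0.50, 0.35, 0.88])

threshold = 0.5  # hypothetical threshold; in practice tuned on known pairs
d = euclidean_distance(reference, candidate)
print(f"distance = {d:.3f} -> {'same person' if d < threshold else 'different person'}")
```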

To carry out this project, I proposed to go through two stages, in order of increasing theoretical and practical complexity:

1- Experiment with the different examples provided in the MaximSDK to learn about the MAX78000 platform and its possibilities for solving Deep Learning problems (particularly the examples that work with handwritten graphic signs, such as "MNIST").

2- Propose the generation of a model that extracts each of the "typical" gestures from a signature, in order to later evaluate the level of inference obtained with respect to one or more other signatures considered valid (a first sketch of this idea is shown after this list).
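As a first sketch of stage 2, the following PyTorch model (my own assumption, not code from the MaximSDK or the MaximIntegratedAI repositories) maps a grayscale signature image to an embedding vector that could later be compared with the Euclidean distance described above:

```python
import torch
import torch.nn as nn

class SignatureEmbeddingNet(nn.Module):
    """Hypothetical small CNN: a 1x64x64 signature crop -> 64-value embedding."""
    def __init__(self, embedding_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average over the remaining 16x16 map
        )
        self.fc = nn.Linear(64, embedding_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.features(x).flatten(1))

# Smoke test with one fake signature image.
net = SignatureEmbeddingNet()
dummy = torch.randn(1, 1, 64, 64)
print(net(dummy).shape)  # torch.Size([1, 64])
```

The real model would of course have to be rebuilt with the ai8x layers so that it can be synthesized for the MAX78000.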

Bibliographic references:

https://www.scperitocaligrafo.com/ejemplos-gestos-tipo-grafologia/
https://e-graphing-plus.com.ar/
https://www.elektormagazine.com/articles/making-coffee-max78000-ai
https://github.com/MaximIntegratedAI
https://sefiks.com/2018/09/03/face-recognition-with-facenet-in-keras/
https://sefiks.com/2020/05/01/a-gentle-introduction-to-face-recognition-in-deep-learning/