java.lang.Object
io.github.jspinak.brobot.actions.actionExecution.manageTrainingData.ActionVectorOneHot
All Implemented Interfaces:
ActionVectorTranslation

@Component public class ActionVectorOneHot extends Object implements ActionVectorTranslation
This translation method uses one-hot encoding to encode the type of action performed. One-hot encoding is appropriate because different actions are conceptually different categories (e.g. CLICK and DRAG) and should not share a scale. If they did (e.g. CLICK = 0, DRAG = 1), there would be a notion of an action with the value 0.5, which makes little sense.

Only basic actions that directly modify the GUI are included; composite actions are not. The following are excluded: FIND, DEFINE, VANISH, GET_TEXT, CLASSIFY, CLICK_UNTIL. HIGHLIGHT is included because it makes a good first test of a GUI automation neural network: producing a correct highlight only requires recognizing the highlighted area and the size and color of the highlight.

Encoded operations should not include FIND operations. For example, clicking on an image requires a FIND operation; that operation is converted to a vector only after the coordinates to click have been found. The Matches object is used to pass all information about the action: it contains the ActionsObject as well as the coordinates acted on and whether the action succeeded. Actions that do not succeed are not converted to vectors.

Some operations that include FIND, such as dragging an image from one location to another, will no longer need FIND when performed by a neural net. Other operations, especially those that do not change the GUI environment, will still require a FIND operation, which can be performed by a separate neural network.
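
As a minimal sketch of the encoding idea, assuming a hypothetical BasicAction enum standing in for the subset of actions this class actually encodes (only CLICK, DRAG, and HIGHLIGHT are named above; the real class may include others and lay out its vector differently), one-hot encoding gives each category its own vector component:

    public class OneHotActionSketch {

        // Hypothetical subset of basic actions that directly modify the GUI.
        enum BasicAction { CLICK, DRAG, HIGHLIGHT }

        // Encodes an action as a one-hot vector: 1.0 at the action's index,
        // 0.0 everywhere else. CLICK -> [1,0,0], DRAG -> [0,1,0], etc.
        static double[] oneHot(BasicAction action) {
            double[] vector = new double[BasicAction.values().length];
            vector[action.ordinal()] = 1.0;
            return vector;
        }

        public static void main(String[] args) {
            // prints [0.0, 1.0, 0.0]
            System.out.println(java.util.Arrays.toString(oneHot(BasicAction.DRAG)));
        }
    }

Because each action occupies its own component, no ordering or scale is implied between categories, which is exactly the property the class description argues for.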