JP-2022533206-A5

Dates

Publication Date
2023-05-29
Application Date
2020-05-20

Description

Additional and other objects, features, and advantages of this disclosure are described in the detailed description, figures, and claims. The present invention provides, for example, the following:

(Item 1) A neural network in a multi-task deep learning paradigm for machine vision, comprising:
an encoder comprising a first layer, a second layer, and a third layer, wherein the first layer comprises a first-layer unit, the first-layer unit comprising one or more first unit blocks; the second layer receives a first-layer output from the first layer at one or more second-layer units within the second layer, a second-layer unit comprising one or more second unit blocks; and the third layer receives a second-layer output from the second layer at one or more third-layer units within the third layer, a third-layer unit comprising one or more third unit blocks;
a decoder operatively coupled to the encoder to receive an encoder output from the encoder; and
one or more loss function layers configured to backpropagate one or more losses for training at least the encoder of the neural network in the deep learning paradigm.

(Item 2) The neural network of item 1, wherein the one or more first unit blocks in the first-layer unit each comprise a convolutional layer, logically followed by a batch normalization layer, which is in turn logically followed by a scaling layer, and wherein each of the one or more first unit blocks further comprises a rectified linear unit logically following the scaling layer.

(Item 3) The neural network of item 1, wherein the second layer comprises a first second-layer unit and a second second-layer unit; the first second-layer unit receives the first-layer output from the first layer and comprises a first second-layer first unit block and a second second-layer first unit block; the first second-layer first unit block and the second second-layer first unit block each comprise a batch normalization layer, followed by a scaling layer, which is in turn logically followed by a rectified linear unit; the batch normalization layer in the first second-layer first unit block logically follows a first convolutional layer; the batch normalization layer in the second second-layer first unit block logically follows a second convolutional layer; and the first convolutional layer is different from the second convolutional layer.
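To make the unit-block structure of items 2 and 3 concrete, here is a minimal PyTorch sketch. The class names, channel counts, and kernel size are illustrative assumptions, not part of the patent text; the explicit Scale module mirrors the claim's separate scaling layer (a Caffe-style BatchNorm + Scale split that PyTorch's BatchNorm2d would normally fold in via affine=True).

```python
# Minimal sketch (assumption): one "unit block" per item 2 --
# convolution, logically followed by batch normalization, a separate
# scaling layer, and a rectified linear unit.
import torch
import torch.nn as nn

class Scale(nn.Module):
    """Per-channel scale and shift, kept separate from batch norm."""
    def __init__(self, channels: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(channels))
        self.beta = nn.Parameter(torch.zeros(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gamma.view(1, -1, 1, 1) + self.beta.view(1, -1, 1, 1)

class UnitBlock(nn.Module):
    """Conv -> BatchNorm -> Scale -> ReLU, per item 2 (names assumed)."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2),
            nn.BatchNorm2d(out_ch, affine=False),  # normalization only
            Scale(out_ch),                         # separate scaling layer
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)
```

Item 3's two first unit blocks would then instantiate UnitBlock with two distinct convolutional layers, as the claim requires the first and second convolutional layers to differ.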
(Item 4) The neural network of item 3, wherein the second second-layer unit comprises a first second-layer second unit block that receives a concatenated output of the second second-layer first unit block and the first-layer output, a second second-layer second unit block, and a third second-layer second unit block; the first second-layer second unit block, the second second-layer second unit block, and the third second-layer second unit block each comprise the batch normalization layer, followed by the scaling layer, which is in turn logically followed by the rectified linear unit; the batch normalization layer in the first second-layer second unit block logically follows the second convolutional layer; the batch normalization layer in the second second-layer second unit block logically follows the first convolutional layer; the batch normalization layer in the third second-layer second unit block logically follows the second convolutional layer; and the third second-layer second unit block is configured to generate a second-layer output.

(Item 5) The neural network of item 1, wherein the first-layer output generated by the first layer is concatenated with the second-layer output generated by the second layer and provided to the third layer as a third-layer input; the third layer comprises a first third-layer unit and a second third-layer unit; the first third-layer unit comprises a plurality of third-layer first unit blocks located at respective first-unit hierarchical levels; and at least some of the plurality of third-layer first unit blocks comprise different dilated convolutional layers corresponding to one or more first dilation factors.

(Item 6) The neural network of item 5, wherein the second third-layer unit comprises a plurality of third-layer second unit blocks located at respective second-unit hierarchical levels; at least some of the plurality of third-layer second unit blocks comprise a plurality of dilated convolutional layers corresponding to one or more second dilation factors; and the plurality of third-layer first unit blocks and the plurality of third-layer second unit blocks comprise at least one respective dilated convolutional layer and a plurality of respective residual blocks for training at least the encoder of the neural network in the deep learning paradigm.

(Item 7) The neural network ac
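The third-layer pattern of items 5 and 6 can be sketched as follows: the first-layer and second-layer outputs are concatenated along the channel axis, then passed through blocks whose convolutions use increasing dilation factors, each wrapped as a residual block. Module names, the 1x1 projection, and the dilation factors (1, 2, 4) are illustrative assumptions, not values taken from the patent.

```python
# Minimal sketch (assumption): third-layer unit per items 5 and 6 --
# concatenated input, then dilated convolutions inside residual blocks.
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """Residual block around a dilated convolution with dilation factor d."""
    def __init__(self, channels: int, dilation: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.conv(x)  # identity skip connection

class ThirdLayerUnit(nn.Module):
    """Stack of dilated residual blocks at successive hierarchical levels."""
    def __init__(self, first_ch: int, second_ch: int, out_ch: int,
                 dilations=(1, 2, 4)):
        super().__init__()
        self.project = nn.Conv2d(first_ch + second_ch, out_ch, 1)
        self.blocks = nn.Sequential(
            *[DilatedResidualBlock(out_ch, d) for d in dilations]
        )

    def forward(self, first_out: torch.Tensor,
                second_out: torch.Tensor) -> torch.Tensor:
        # Item 5: concatenate first-layer and second-layer outputs to
        # form the third-layer input (spatial sizes assumed to match).
        x = torch.cat([first_out, second_out], dim=1)
        return self.blocks(self.project(x))
```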
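Finally, item 1's overall topology, an encoder, a decoder operatively coupled to it, and loss function layers that backpropagate losses to train at least the encoder, could be wired up as below. The two example task heads (segmentation and depth) and their loss terms are hypothetical stand-ins for whatever tasks the multi-task paradigm targets; nothing here is specified by the patent text.

```python
# Minimal sketch (assumption): item-1 topology with multi-task losses
# backpropagated through decoder and encoder.
import torch
import torch.nn as nn

class MultiTaskNetwork(nn.Module):
    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder  # first, second, and third layers
        self.decoder = decoder  # receives the encoder output

    def forward(self, x):
        return self.decoder(self.encoder(x))

def training_step(model, optimizer, images, seg_target, depth_target):
    """One step of multi-task training: backpropagate the combined loss."""
    seg_pred, depth_pred = model(images)  # decoder assumed to yield two heads
    loss = (nn.functional.cross_entropy(seg_pred, seg_target)
            + nn.functional.l1_loss(depth_pred, depth_target))
    optimizer.zero_grad()
    loss.backward()   # losses flow back through decoder and encoder weights
    optimizer.step()
    return loss.item()
```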