Full metadata record
DC Field | Value | Language
dc.contributor.author | Sheshkus, A. | -
dc.contributor.author | Chirvonaya, A. | -
dc.contributor.author | Matveev, D. | -
dc.contributor.author | Nikolaev, D. | -
dc.contributor.author | Arlazarov, V.L. | -
dc.date.accessioned | 2020-11-20 16:20:34 | -
dc.date.available | 2020-11-20 16:20:34 | -
dc.date.issued | 2020-10 | -
dc.identifier | Dspace\SGAU\20201110\86242 | ru
dc.identifier.citation | Sheshkus A, Chirvonaya A, Matveev D, Nikolaev D, Arlazarov VL. Vanishing point detection with direct and transposed fast Hough transform inside the neural network. Computer Optics 2020; 44(5): 737-745. DOI: 10.18287/2412-6179-CO-676. | ru
dc.identifier.uri | https://dx.doi.org/10.18287/2412-6179-CO-676 | -
dc.identifier.uri | http://repo.ssau.ru/handle/Zhurnal-Komputernaya-optika/Vanishing-point-detection-with-direct-and-transposed-fast-Hough-transform-inside-the-neural-network-86242 | -
dc.description.abstract | In this paper, we suggest a new neural network architecture for vanishing point detection in images. The key element is the use of the direct and transposed fast Hough transforms separated by convolutional layer blocks with standard activation functions. It allows us to get the answer in the coordinates of the input image at the output of the network and thus to calculate the coordinates of the vanishing point by simply selecting the maximum. Besides, it was proved that calculation of the transposed fast Hough transform can be performed using the direct one. The use of integral operators enables the neural network to rely on global rectilinear features in the image, and so it is ideal for detecting vanishing points. To demonstrate the effectiveness of the proposed architecture, we use a set of images from a DVR and show its superiority over existing methods. Note, in addition, that the proposed neural network architecture essentially repeats the process of direct and back projection used, for example, in computed tomography. | ru
dc.description.sponsorship | This work was supported by the Russian Foundation for Basic Research (projects 18-29-26027 and 17-29-03161). | ru
dc.language.iso | en | ru
dc.publisher | Самарский национальный исследовательский университет имени акад. С.П. Королева | ru
dc.relation.ispartofseries | 44;5 | -
dc.subject | fast Hough transform | ru
dc.subject | vanishing points | ru
dc.subject | deep learning | ru
dc.subject | convolutional neural networks | ru
dc.title | Vanishing point detection with direct and transposed fast Hough transform inside the neural network | ru
dc.type | Article | ru
dc.textpart | e. it does not change the disposition of the A columns inside a group {(i−1)·n+k, k = 1, …, n}, i = 1, …, n. […] From Lemma 3, B_{i,j} = B(i+j) + (1,1)×(n+1−i−j); therefore, B is symmetric. From Lemma 2, the B_{i,j} are symmetric, and thus AC is also symmetric. ◄ Theorem 1. […] (or CAC = A^T). Note that C = C^{−1}; therefore, A^T is also a matrix of […] in basis < x... | -
dc.classindex.scsti | 28.23.15 | -
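The symmetry argument quoted in dc.textpart can be illustrated generically: if C is an involutive permutation matrix (C = C^{−1} = C^T) and the product AC is symmetric, then CAC = A^T, so the transposed transform can be obtained by applying the direct one between two permutations. A minimal NumPy sketch of this linear-algebra fact follows; the matrices here are random stand-ins chosen to satisfy the lemma's hypotheses, not the paper's actual fast Hough transform operator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# C: the row-reversal permutation matrix, an example of an involutive
# permutation (C @ C == I) that is also symmetric (C.T == C).
C = np.eye(n)[::-1]

# Build an A for which A @ C is symmetric, as the lemmas establish for the
# FHT matrix: take any symmetric B and set A = B @ C, so that A @ C == B.
M = rng.random((n, n))
B = M + M.T          # symmetric stand-in
A = B @ C

# The identity of Theorem 1: C @ A @ C == A^T.
assert np.allclose(C @ A @ C, A.T)

# Consequence: applying the transposed operator to a vector y reduces to
# permute -> direct transform -> permute, with no separate transposed code.
y = rng.random(n)
assert np.allclose(A.T @ y, C @ (A @ (C @ y)))
print("CAC == A^T verified")
```

The derivation is one line: AC symmetric means AC = (AC)^T = C^T A^T = C A^T, and left-multiplying by C (using C² = I) gives CAC = A^T.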
Appears in collections: Журнал "Компьютерная оптика" (Computer Optics journal)

Files in this item:
File | Description | Size | Format
440508.pdf | Main article | 1.07 MB | Adobe PDF

All items in this electronic archive are protected by copyright, with all rights reserved.