  1. What does 1x1 convolution mean in a neural network?

1x1 conv creates channel-wise dependencies at negligible cost. This is especially exploited in depthwise-separable convolutions. Nobody said anything about this but I'm writing this as a …
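A minimal sketch of the point in this snippet: a 1x1 convolution mixes channels at each spatial position independently, so it is nothing more than a per-pixel linear map. The function name and shapes below are my own illustration, not from the linked answer.

```python
import numpy as np

def conv1x1(x, w):
    # x: (c_in, h, w_), w: (c_out, c_in).
    # A 1x1 convolution applies the same channel-mixing matrix
    # at every spatial position.
    return np.einsum('oc,chw->ohw', w, x)

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4, 4))   # 3 input channels, 4x4 feature map
w = rng.standard_normal((8, 3))      # 8 output channels

y = conv1x1(x, w)
assert y.shape == (8, 4, 4)
# Identical to a plain matrix multiply at any single pixel:
assert np.allclose(y[:, 2, 1], w @ x[:, 2, 1])
```

The cost is only `c_in * c_out` weights per position, which is why 1x1 convs are the cheap channel-mixing half of a depthwise-separable convolution.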

  2. What is the difference between Conv1D and Conv2D?

Jul 31, 2017 · I will be using a PyTorch perspective; however, the logic remains the same. When using Conv1d(), we have to keep in mind that we are most likely going to work with 2 …
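To make the Conv1d/Conv2d distinction concrete, here is a hand-rolled sketch (my own "valid" cross-correlation helpers, not PyTorch's API): in 1D the kernel slides along one axis, in 2D along two.

```python
import numpy as np

def conv1d(x, k):
    # Valid 1D cross-correlation: the kernel slides along one axis.
    n, m = len(x), len(k)
    return np.array([np.dot(x[i:i + m], k) for i in range(n - m + 1)])

def conv2d(x, k):
    # Valid 2D cross-correlation: the kernel slides along two axes.
    h, w = x.shape
    kh, kw = k.shape
    return np.array([[np.sum(x[i:i + kh, j:j + kw] * k)
                      for j in range(w - kw + 1)]
                     for i in range(h - kh + 1)])

sig = np.arange(10.0)                 # e.g. an audio signal
img = np.arange(36.0).reshape(6, 6)   # e.g. a grayscale image
assert conv1d(sig, np.ones(3)).shape == (8,)      # 10 - 3 + 1
assert conv2d(img, np.ones((3, 3))).shape == (4, 4)  # 6 - 3 + 1 per axis
```

In both cases the output size per sliding axis is `n - k + 1`; only the number of sliding axes differs.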

  3. neural networks - Difference between strided and non-strided ...

Aug 6, 2018 · conv = conv_2d(strides=) I want to know in what sense a non-strided convolution differs from a strided convolution. I know how convolutions with strides work but I am not …
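The relationship the question is after can be shown directly: a strided convolution evaluates the kernel only at every s-th position, so it equals the stride-1 ("non-strided") output subsampled. A 1D sketch, with names of my choosing:

```python
import numpy as np

def conv1d_strided(x, k, s=1):
    # Valid 1D cross-correlation, evaluating only every s-th position.
    n, m = len(x), len(k)
    return np.array([np.dot(x[i:i + m], k)
                     for i in range(0, n - m + 1, s)])

x = np.arange(12.0)
k = np.array([1.0, 0.0, -1.0])
full = conv1d_strided(x, k, s=1)
strided = conv1d_strided(x, k, s=2)

# Strided output == non-strided output, keeping every 2nd value.
assert np.allclose(strided, full[::2])
# Output length: floor((n - m) / s) + 1
assert len(strided) == (12 - 3) // 2 + 1
```

So striding does not change what each output value computes; it only skips positions, shrinking the output (and the compute) by roughly a factor of s per axis.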

  4. How to calculate the Transposed Convolution? - Cross Validated

    Sep 3, 2022 · Studying for my finals in Deep learning. I'm trying to solve the following question: Calculate the Transposed Convolution of input $A$ with kernel $K$: $$ A=\begin ...
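Since the matrices in the snippet are cut off, here is a generic 1D worked example instead (all names and values mine): a transposed convolution can be computed by scatter-adding each input element scaled by the kernel, and this matches multiplying by the transpose of the ordinary stride-1 convolution matrix.

```python
import numpy as np

def conv_matrix(k, n):
    # Matrix C such that C @ x is the valid stride-1 convolution of
    # a length-n input x with kernel k.
    m = len(k)
    rows = n - m + 1
    C = np.zeros((rows, n))
    for i in range(rows):
        C[i, i:i + m] = k
    return C

def transposed_conv1d(y, k):
    # Scatter-add: each input element paints a scaled copy of the
    # kernel into the output. Output length: (len(y) - 1) * 1 + len(k).
    m = len(k)
    out = np.zeros(len(y) + m - 1)
    for i, v in enumerate(y):
        out[i:i + m] += v * k
    return out

k = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0])
n = len(y) + len(k) - 1  # 4
# Scatter-add result equals C^T @ y -- hence "transposed" convolution.
assert np.allclose(transposed_conv1d(y, k), conv_matrix(k, n).T @ y)
```

For exam-style problems the same recipe works in 2D: place a kernel-shaped, input-scaled patch at each (stride-spaced) position and sum the overlaps.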

  5. How do bottleneck architectures work in neural networks?

    We define a bottleneck architecture as the type found in the ResNet paper where [two 3x3 conv layers] are replaced by [one 1x1 conv, one 3x3 conv, and another 1x1 conv layer]. I …
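The parameter arithmetic behind that replacement is easy to check. Using the ResNet paper's 256 → 64 → 64 → 256 bottleneck channel sizes (biases ignored):

```python
def conv_params(c_in, c_out, k):
    # Weight count of a k x k convolution, ignoring biases.
    return c_in * c_out * k * k

# Plain block: two 3x3 convs at 256 channels.
plain = 2 * conv_params(256, 256, 3)

# Bottleneck: 1x1 reduce -> 3x3 at reduced width -> 1x1 expand.
bottleneck = (conv_params(256, 64, 1)
              + conv_params(64, 64, 3)
              + conv_params(64, 256, 1))

assert plain == 1_179_648
assert bottleneck == 69_632
assert bottleneck < plain // 16   # ~17x fewer parameters
```

The 1x1 convs compress to 64 channels so the expensive 3x3 conv runs at a quarter of the width, then expand back, which is what makes deep ResNets affordable.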

  6. Difference between Conv and FC layers? - Cross Validated

Nov 9, 2017 · What is the difference between conv layers and FC layers? Why can't I use conv layers instead of FC layers?
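One way to see the relationship (a sketch of my own, not from the linked answers): a convolution whose kernel covers the entire input produces a single output value per filter, which is exactly a fully connected neuron.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 5))   # one-channel 5x5 input
w = rng.standard_normal((5, 5))   # one 5x5 filter == one FC neuron

# "Valid" conv with kernel size == input size: one output value.
conv_out = np.sum(x * w)
# The same neuron as a fully connected layer on the flattened input.
fc_out = w.ravel() @ x.ravel()

assert np.isclose(conv_out, fc_out)
```

The practical difference is weight sharing: a smaller kernel reuses the same weights at every position (translation equivariance, far fewer parameters), while an FC layer learns a separate weight for every input location.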

  7. What are the advantages of FC layers over Conv layers?

    Sep 23, 2020 · I am trying to think of scenarios where a fully connected (FC) layer is a better choice than a convolution layer. In terms of time complexity, are they the same? I know that …

  8. Understanding the output shape of the following YOLO network

    Dec 24, 2022 · Below you can see a convolutional network with 24 convolutional layers. I am trying to understand the shape of the network. Given the input image with shape 448x448x3, …
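Shape questions like this reduce to applying one formula per layer: `out = floor((n + 2p - k) / s) + 1`. A sketch tracking the spatial size from a 448x448 input; the two layer specs below are illustrative (kernel, stride, padding), not the exact YOLO configuration.

```python
def out_size(n, k, s, p):
    # Spatial output size of a conv or pool layer:
    # floor((n + 2*padding - kernel) / stride) + 1
    return (n + 2 * p - k) // s + 1

n = 448
for k, s, p in [(7, 2, 3),    # 7x7 conv, stride 2, pad 3
                (2, 2, 0)]:   # 2x2 max-pool, stride 2
    n = out_size(n, k, s, p)

assert n == 112   # 448 -> 224 -> 112
```

Chaining this through all 24 layers (channels just follow the filter counts) gives the network's final grid shape.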

  9. What does 1x1 convolution mean in a neural network? (v2)

    Dec 21, 2018 · Most of the answers to that question indicated how 1x1 conv layers are used for dimensionality reduction (or in general, a dimensionality change) in the filter dimension.

  10. Why does residual block in resnet shown as skipping not just 1 …

    Apr 6, 2020 · Why is that when the diagram is talking about only skipping 1-layer, it's showing skip-connection after relu? Isn't that part of the second conv+relu layer? I've seen the …
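The ordering the question is confused about can be written out directly. A sketch of the standard 2-layer residual block (1x1 channel matrices stand in for 3x3 convs to keep it small; names are mine): the skip bypasses conv1 → relu → conv2, and the second relu is applied after the addition, not before.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    out = relu(w1 @ x)      # first conv + relu (inside the skip)
    out = w2 @ out          # second conv -- NO relu yet
    return relu(out + x)    # add the skip, THEN apply relu

rng = np.random.default_rng(2)
x = rng.standard_normal(4)
w1 = rng.standard_normal((4, 4))
w2 = rng.standard_normal((4, 4))

y = residual_block(x, w1, w2)
assert y.shape == x.shape
assert np.all(y >= 0)       # final relu comes after the addition
```

So in the diagram the relu drawn after the join belongs to the block's output, which is why the skip appears to land "after relu": the second conv's nonlinearity is deferred until the identity has been added.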