All convolutions within a dense block are ReLU-activated and use batch normalization. Channel-wise concatenation is only possible if the height and width dimensions of the data remain unchanged, so all convolutions inside a dense block use a stride of one. Pooling layers are inserted between dense blocks for further dimensionality reduction.
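A minimal sketch of this idea in PyTorch, assuming a BN-ReLU-Conv ordering inside the block; the class names, the growth rate, and the 3x3 and 1x1 kernel sizes are illustrative choices, not specifics from the text:

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One BN-ReLU-Conv layer inside a dense block.

    Stride 1 (with padding) keeps height and width unchanged, so the
    output can be concatenated with the input along the channel axis.
    """
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3,
                              stride=1, padding=1, bias=False)

    def forward(self, x):
        out = self.conv(torch.relu(self.bn(x)))
        # Channel-wise concatenation: only valid because spatial dims match.
        return torch.cat([x, out], dim=1)

class Transition(nn.Module):
    """Layer between dense blocks: pooling reduces spatial dimensions."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        return self.pool(self.conv(torch.relu(self.bn(x))))
```

Because each dense layer only adds `growth_rate` channels while leaving height and width untouched, the transition layer is the sole place where the feature maps are downsampled.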