Proceedings of: 2020 IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21-26 June 2020 ; This paper concerns the maximum coding rate at which data can be transmitted over a non-coherent Rayleigh block-fading channel with multiple transmit and receive antennas (MIMO) using a code of a given blocklength with a given block-error probability. In particular, a high-SNR normal approximation of the maximum coding rate is presented, which is proved to become accurate as the signal-to-noise ratio (SNR) and the number of coherence intervals L tend to infinity. ; C. Qi and T. Koch have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant No. 714161). T. Koch has further received funding from the Spanish Ministerio de Economía y Competitividad under Grants RYC-2014-16332 and TEC2016-78434-C3-3-R (AEI/FEDER, EU).
An earlier version of this paper was presented in part at the 2019 IEEE International Symposium on Information Theory [DOI:10.1109/ISIT.2019.8849751], in part at the 2020 International Zurich Seminar on Information and Communication [DOI:10.3929/ethz-b-000403243], and in part at the 2020 IEEE International Symposium on Information Theory [DOI:10.1109/ISIT44484.2020.9174091]. ; This paper considers a Gaussian multiple-access channel with random user activity where the total number of users ℓₙ and the average number of active users kₙ may grow with the blocklength n. For this channel, it studies the maximum number of bits that can be transmitted reliably per unit-energy as a function of ℓₙ and kₙ. When all users are active with probability one, i.e., ℓₙ = kₙ, it is demonstrated that, if kₙ is of an order strictly below n/log n, then each user can achieve the single-user capacity per unit-energy (log e)/N₀ (where N₀/2 is the noise power) by using an orthogonal-access scheme. In contrast, if kₙ is of an order strictly above n/log n, then the users cannot achieve any positive rate per unit-energy. Consequently, there is a sharp transition between orders of growth where interference-free communication is feasible and orders of growth where reliable communication at a positive rate per unit-energy is infeasible. It is further demonstrated that orthogonal-access schemes in combination with orthogonal codebooks, which achieve the capacity per unit-energy when the number of users is bounded, can be strictly suboptimal. When the user activity is random, i.e., when ℓₙ and kₙ are different, it is demonstrated that, if kₙ log ℓₙ is sublinear in n, then each user can achieve the single-user capacity per unit-energy (log e)/N₀. Conversely, if kₙ log ℓₙ is superlinear in n, then the users cannot achieve any positive rate per unit-energy.
Consequently, there is again a sharp transition, depending on the asymptotic behaviors of both ℓₙ and kₙ, between orders of growth where interference-free communication is feasible and orders of growth where reliable communication at a positive rate is infeasible. It is further demonstrated that orthogonal-access schemes can be strictly suboptimal. ; The work of Jithin Ravi was supported by the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Program under Grant 714161. The work of Tobias Koch was supported in part by the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Program under Grant 714161; and in part by the Spanish Ministerio de Ciencia e Innovación under Grant RYC-2014-16332, Grant TEC2016-78434-C3-3-R (AEI/FEDER, EU), and Grant PID2020-116683GB-C21 / AEI / 10.13039/501100011033.
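The order-of-growth conditions in the record above can be checked numerically. The sketch below is an illustration of the stated scaling law, not code from the paper; the two growth orders chosen are hypothetical examples. With all users active (ℓₙ = kₙ), the quantity kₙ log kₙ / n vanishes for orders strictly below n/log n and diverges for orders strictly above it.

```python
import math

def ratio(k_of_n, n):
    """k_n * log(k_n) / n -- vanishes iff k_n grows strictly slower than n/log n."""
    k = k_of_n(n)
    return k * math.log(k) / n

# Hypothetical example orders, one on each side of the n/log n transition.
below = lambda n: n / (math.log(n) ** 2)       # strictly below n/log n
above = lambda n: n / math.sqrt(math.log(n))   # strictly above n/log n

for n in (10**3, 10**6, 10**9):
    print(n, ratio(below, n), ratio(above, n))
```

As n grows, the first ratio decays toward zero (interference-free communication feasible) while the second grows without bound (no positive rate per unit-energy).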
Proceedings of: 55th Asilomar Conference on Signals, Systems, and Computers, October 31 - November 3, 2021, Pacific Grove, California, USA. ; In the emerging Internet of Things, a massive number of devices may connect to one common receiver. Consequently, models that study this setting are variants of the classical multiple-access channel where the number of users grows with the blocklength. Roughly, these models can be classified into three groups based on two criteria: the notion of probability of error and whether users use the same codebook. The first group follows the classical notion of probability of error and assumes that users use different codebooks. In the second group, users use different codebooks, but a new notion of probability of error called per-user probability of error is considered. The third group considers the per-user probability of error and that users are restricted to use the same codebook. This group is also known as unsourced random access. For the first and second groups of models, scaling laws that describe the capacity per unit-energy as a function of the order of growth of users were characterized by Ravi and Koch (arxiv.org/abs/2012.10350). In this paper, we first review these results. We then present scaling laws for the third group of models, i.e., unsourced random access. ; J. Ravi and T. Koch have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant No. 714161). T. Koch has further received funding from the Spanish Ministerio de Ciencia e Innovación under Grant PID2020-116683GB-C21 / AEI / 10.13039/501100011033.
Proceedings of: 2020 IEEE International Symposium on Information Theory (ISIT), Los Angeles, CA, USA, 21-26 June 2020 ; We consider a Gaussian multiple-access channel with random user activity where the total number of users lₙ and the average number of active users kₙ may be unbounded. For this channel, we characterize the maximum number of bits that can be transmitted reliably per unit-energy in terms of lₙ and kₙ. We show that if kₙ log lₙ is sublinear in n, then each user can achieve the single-user capacity per unit-energy. Conversely, if kₙ log lₙ is superlinear in n, then the capacity per unit-energy is zero. We further demonstrate that orthogonal-access schemes, which are optimal when all users are active with probability one, can be strictly suboptimal. ; J. Ravi and T. Koch have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant No. 714161). T. Koch has further received funding from the Spanish Ministerio de Economía y Competitividad under Grants RYC-2014-16332 and TEC2016-78434-C3-3-R (AEI/FEDER, EU).
In 1959, Rényi proposed the information dimension and the d-dimensional entropy to measure the information content of general random variables. This paper proposes a generalization of information dimension to stochastic processes by defining the information dimension rate as the entropy rate of the uniformly quantized stochastic process divided by minus the logarithm of the quantizer step size 1/m in the limit as m → ∞. It is demonstrated that the information dimension rate coincides with the rate-distortion dimension, defined as twice the rate-distortion function R(D) of the stochastic process divided by −log D in the limit as D ↓ 0. It is further shown that among all multivariate stationary processes with a given (matrix-valued) spectral distribution function (SDF), the Gaussian process has the largest information dimension rate and the information dimension rate of multivariate stationary Gaussian processes is given by the average rank of the derivative of the SDF. The presented results reveal that the fundamental limits of almost zero-distortion recovery via compressible signal pursuit and almost lossless analog compression are different in general. ; The work of Bernhard C. Geiger has partly been funded by the Erwin Schrödinger Fellowship J 3765 of the Austrian Science Fund and by the German Ministry of Education and Research in the framework of an Alexander von Humboldt Professorship. The Know-Center is funded within the Austrian COMET Program - Competence Centers for Excellent Technologies - under the auspices of the Austrian Federal Ministry of Transport, Innovation and Technology, the Austrian Federal Ministry of Digital and Economic Affairs, and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.
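Rényi's information dimension underlying the record above can be estimated numerically in the scalar special case (a single random variable, not the process-level information dimension rate studied in the paper). The sketch below is a minimal illustration assuming nothing beyond the definition H(⌊mX⌋)/log m; all function names are ours, not from the paper.

```python
import math
import random

def empirical_entropy(samples):
    """Empirical Shannon entropy (in nats) of a list of discrete outcomes."""
    counts = {}
    for s in samples:
        counts[s] = counts.get(s, 0) + 1
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def info_dimension_estimate(draw, m, n_samples=200_000, seed=0):
    """Estimate H(floor(m*X)) / log(m), which tends to Renyi's
    information dimension of X as m grows."""
    rng = random.Random(seed)
    quantized = [math.floor(m * draw(rng)) for _ in range(n_samples)]
    return empirical_entropy(quantized) / math.log(m)

# Continuous uniform on [0, 1): information dimension 1.
d_cont = info_dimension_estimate(lambda rng: rng.random(), m=64)
# Fair coin (discrete): information dimension 0 in the m -> infinity limit.
d_disc = info_dimension_estimate(lambda rng: float(rng.randint(0, 1)), m=64)
print(d_cont, d_disc)
```

For the continuous uniform source the ratio is already close to 1 at m = 64, while for the discrete source it decays like log 2 / log m.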
The work of Tobias Koch has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement number 714161), from the 7th European Union Framework Programme under Grant 333680, from the Ministerio de Economía y Competitividad of Spain under Grants TEC2013-41718-R, RYC-2014-16332, and TEC2016-78434-C3-3-R (AEI/FEDER, EU), and from the Comunidad de Madrid under Grant S2103/ICE-2845.
Proceedings of: 2019 IEEE International Symposium on Information Theory (ISIT), 7-12 July 2019, Paris, France ; We consider a Gaussian multiple-access channel where the number of transmitters grows with the blocklength n. For this setup, the maximum number of bits that can be transmitted reliably per unit-energy is analyzed. We show that if the number of users is of an order strictly above n/log n, then the users cannot achieve any positive rate per unit-energy. In contrast, if the number of users is of order strictly below n/log n, then each user can achieve the single-user capacity per unit-energy (log e)/N₀ (where N₀/2 is the noise power) by using an orthogonal-access scheme such as time-division multiple access. We further demonstrate that orthogonal codebooks, which achieve the capacity per unit-energy when the number of users is bounded, can be strictly suboptimal. ; J. Ravi and T. Koch have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant No. 714161). T. Koch has further received funding from the Spanish Ministerio de Economía y Competitividad under Grants RYC-2014-16332 and TEC2016-78434-C3-3-R (AEI/FEDER, EU).
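The single-user limit (log e)/N₀ quoted in the record above can be checked numerically for the AWGN channel. A minimal sketch, assuming (as in the record) noise power N₀/2 per channel use and taking the symbol energy P to zero; this is a generic illustration, not the multi-user analysis of the paper.

```python
import math

N0 = 2.0  # noise power N0/2 = 1 per channel use (assumed for illustration)

def bits_per_unit_energy(P):
    """AWGN capacity per channel use, 0.5*log2(1 + 2P/N0), divided by the
    symbol energy P: bits transmitted per unit of energy."""
    return 0.5 * math.log2(1 + 2 * P / N0) / P

# As P -> 0 this ratio climbs toward the capacity per unit-energy (log2 e)/N0.
limit = math.log2(math.e) / N0
for P in (1.0, 0.1, 0.01, 0.001):
    print(P, bits_per_unit_energy(P), limit)
```

The printed ratios increase monotonically toward the limit, reflecting that the capacity per unit-energy is attained in the low-power regime.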
The nonnegativity of relative entropy implies that the differential entropy of a random vector X with probability density function (pdf) f is upper-bounded by −E[log g(X)] for any arbitrary pdf g. Using this inequality with a cleverly chosen g, we derive a lower bound on the asymptotic excess rate of entropy-constrained vector quantization for d-dimensional sources and rth-power distortion, where the asymptotic excess rate is defined as the difference between the smallest output entropy of a vector quantizer satisfying the distortion constraint and the rate-distortion function in the limit as the distortion tends to zero. Specialized to the one-dimensional case, this lower bound coincides with the asymptotic excess rate achieved by a uniform quantizer, thereby recovering the result by Gish and Pierce that uniform quantizers are asymptotically optimal as the allowed distortion tends to zero. Furthermore, in the one-dimensional case, the derivation of the lower bound reveals a necessary condition for a sequence of quantizers to be asymptotically optimal. This condition implies that any sequence of asymptotically optimal almost-regular quantizers must converge to a uniform quantizer as the distortion tends to zero. While the obtained lower bound itself is not novel, to the best of our knowledge, we present the first rigorous derivation that follows the direct approach by Gish and Pierce without resorting to heuristic high-resolution approximations commonly found in the quantization literature. Furthermore, our derivation holds for all d-dimensional sources having finite differential entropy and whose integer part has finite entropy. In contrast to Gish and Pierce, we do not require additional constraints on the continuity or decay of the source pdf.
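The one-dimensional excess rate recovered in the record above, the classical Gish-Pierce constant (1/2) log₂(πe/6) ≈ 0.2546 bits, can be verified numerically for a unit-variance Gaussian source. The sketch below (our illustration, not code from the paper) computes the output entropy of a uniform quantizer with small step Δ exactly from the Gaussian CDF and compares it with the rate-distortion function R(D) = (1/2) log₂(1/D) evaluated at the high-resolution distortion D = Δ²/12.

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def uniform_quantizer_entropy(delta, span=12.0):
    """Output entropy (in bits) of a uniform quantizer with step `delta`
    applied to a standard Gaussian, summed over bins covering [-span, span]."""
    H, k = 0.0, int(-span / delta)
    while k * delta < span:
        p = Phi((k + 1) * delta) - Phi(k * delta)
        if p > 0:
            H -= p * math.log2(p)
        k += 1
    return H

delta = 0.05
D = delta ** 2 / 12              # high-resolution distortion of the uniform quantizer
R = 0.5 * math.log2(1 / D)       # Gaussian rate-distortion function, sigma^2 = 1
excess = uniform_quantizer_entropy(delta) - R
print(excess, 0.5 * math.log2(math.pi * math.e / 6))  # both ~0.2546 bits
```

The agreement at Δ = 0.05 already matches the asymptotic constant to three decimal places.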
; This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement number 714161), from the 7th European Union Framework Programme under Grant 333680, from the Ministerio de Economía y Competitividad of Spain under Grants TEC2013-41718-R, RYC-2014-16332, IJCI-2015-27020, TEC2015-69648-REDC, and TEC2016-78434-C3-3-R (AEI/FEDER, EU), and from the Comunidad de Madrid under Grant S2103/ICE-2845. The material in this paper was presented in part at the 2016 IEEE International Symposium on Information Theory, Barcelona, Spain, July 2016.
Proceedings of: 2017 IEEE International Symposium on Information Theory, Aachen, Germany, 25-30 June 2017 ; Jalali and Poor ("Universal compressed sensing," arXiv:1406.7807v3, Jan. 2016) have recently proposed a generalization of Rényi's information dimension to stationary stochastic processes by defining the information dimension of the stochastic process as the information dimension of k samples divided by k in the limit as k → ∞. This paper proposes an alternative definition of information dimension as the entropy rate of the uniformly-quantized stochastic process divided by minus the logarithm of the quantizer step size 1/m in the limit as m → ∞. It is demonstrated that both definitions are equivalent for stochastic processes that are ψ*-mixing, but that they may differ in general. In particular, it is shown that for Gaussian processes with essentially-bounded power spectral density (PSD), the proposed information dimension equals the Lebesgue measure of the PSD's support. This is in stark contrast to the information dimension proposed by Jalali and Poor, which is 1 if the process's PSD is positive on a set of positive Lebesgue measure, irrespective of its support size. ; The work of Bernhard C. Geiger has been funded by the Erwin Schrödinger Fellowship J 3765 of the Austrian Science Fund and by the German Ministry of Education and Research in the framework of an Alexander von Humboldt Professorship. The work of Tobias Koch has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement number 714161), from the 7th European Union Framework Programme under Grant 333680, from the Spanish Ministerio de Economía y Competitividad under Grants TEC2013-41718-R, RYC-2014-16332 and TEC2016-78434-C3-3-R (AEI/FEDER, EU), and from the Comunidad de Madrid under Grant S2103/ICE-2845.
This article concerns the maximum coding rate at which data can be transmitted over a noncoherent, single-antenna, Rayleigh block-fading channel using an error-correcting code of a given blocklength with a block-error probability not exceeding a given value. A high-SNR normal approximation of the maximum coding rate is presented that becomes accurate as the signal-to-noise ratio (SNR) and the number of coherence intervals $L$ over which we code tend to infinity. Numerical analyses suggest that the approximation is accurate at SNR values above 15 dB and when the number of coherence intervals is 10 or more. ; The work of A. Lancho and T. Koch was supported in part by the Spanish Ministerio de Economía y Competitividad under Grant TEC2013-41718-R and Grant TEC2016-78434-C3-3-R (AEI/FEDER, EU), in part by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under Grant 714161, and in part by the Comunidad de Madrid under Grant S2103/ICE-2845. The work of A. Lancho was further supported by an FPU fellowship from the Spanish Ministerio de Educación, Cultura y Deporte under Grant FPU14/01274. The work of T. Koch was further supported in part by the Spanish Ministerio de Economía y Competitividad under Grant RYC-2014-16332 and in part by the 7th European Union Framework Programme under Grant 333680. The work of G. Durisi was supported by the Swedish Research Council under Grant 2012-4571 and Grant 2016-03293.
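The normal approximation in the record above is channel-specific (noncoherent Rayleigh block fading), and its terms are derived in the paper itself. As a generic illustration of what such an approximation looks like, the sketch below evaluates the well-known normal approximation R*(n, ε) ≈ C − sqrt(V/n) Q⁻¹(ε) + log₂(n)/(2n) for the real AWGN channel (the Polyanskiy-Poor-Verdú expansion, not the fading-channel approximation of this article).

```python
import math
from statistics import NormalDist

def awgn_normal_approx(n, eps, snr):
    """Normal approximation to the maximum coding rate (bits/channel use)
    of the real AWGN channel at blocklength n and error probability eps."""
    C = 0.5 * math.log2(1 + snr)                                   # capacity
    V = (snr * (snr + 2)) / (2 * (snr + 1) ** 2) * math.log2(math.e) ** 2  # dispersion
    Qinv = NormalDist().inv_cdf(1 - eps)                           # Q^{-1}(eps)
    return C - math.sqrt(V / n) * Qinv + math.log2(n) / (2 * n)

print(awgn_normal_approx(n=1000, eps=1e-3, snr=1.0))
```

At 0 dB SNR and n = 1000 the backoff from capacity (C = 0.5) is roughly 0.08 bits; as n grows the rate climbs back toward C.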
In this paper, we analyze the tradeoff between coding rate and asymptotic performance of a class of generalized low-density parity-check (GLDPC) codes constructed by including a certain fraction of generalized constraint (GC) nodes in the graph. The rate of the GLDPC ensemble is bounded using classical results on linear block codes, namely, the Hamming bound and the Varshamov bound. We also study the impact of the decoding method used at GC nodes. To incorporate both bounded-distance (BD) and maximum-likelihood (ML) decoding at GC nodes into our analysis without resorting to multi-edge-type degree distributions (DDs), we propose the probabilistic peeling decoding (P-PD) algorithm, which models the decoding step at every GC node as an instance of a Bernoulli random variable with a successful decoding probability that depends on both the GC block code and its decoding algorithm. The P-PD asymptotic performance over the BEC can be efficiently predicted using standard techniques for LDPC codes such as density evolution (DE) or the differential equation method. Furthermore, for a class of GLDPC ensembles, we demonstrate that the simulated P-PD performance accurately predicts the actual performance of the GLDPC code under ML decoding at GC nodes. We illustrate our analysis for GLDPC code ensembles with regular and irregular DDs. In all cases, we show that a large fraction of GC nodes is required to reduce the original gap to capacity, but the optimal fraction is strictly smaller than one. We then consider techniques to further reduce the gap to capacity by means of random puncturing and the inclusion of a certain fraction of generalized variable nodes in the graph.
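The density evolution (DE) technique mentioned in the record above can be illustrated with the standard recursion for a plain regular LDPC ensemble on the BEC (without the Bernoulli GC-node step of the proposed P-PD algorithm). A minimal sketch, using the textbook (3,6) ensemble as an assumed example:

```python
def de_threshold_check(eps, dv=3, dc=6, iters=2000):
    """BEC density evolution for a regular (dv, dc) LDPC ensemble:
    x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1).
    Returns the residual erasure probability after `iters` iterations."""
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
    return x

# The (3,6) BEC threshold is ~0.4294: below it the erasure fraction dies out,
# above it the recursion is stuck at a positive fixed point.
print(de_threshold_check(0.40))  # -> ~0
print(de_threshold_check(0.45))  # -> positive fixed point
```

The P-PD analysis in the paper augments a recursion of this kind with the success probability of the decoder used at each GC node.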
; This work was supported in part by the Spanish Ministerio de Economía y Competitividad and the Agencia Española de Investigación under Grant TEC2016-78434-C3-3-R (AEI/FEDER, EU) and in part by the Comunidad de Madrid in Spain under Grant S2103/ICE-2845, Grant IND2017/TIC-7618, Grant IND2018/TIC-9649, and Grant Y2018/TCS-4705. P. M. Olmos was further supported by the Spanish Ministerio de Economía y Competitividad under Grant IJCI-2014-19150. T. Koch was further supported by the European Research Council (ERC) through the European Union's Horizon 2020 research and innovation programme under Grant 714161, by the 7th European Union Framework Programme under Grant 333680, and by the Spanish Ministerio de Economía y Competitividad under Grant TEC2013-41718-R and Grant RYC-2014-16332.
Proceedings of: 52nd Annual Conference on Information Sciences and Systems (CISS 2018) ; Capacity and outage capacity characterize the maximum coding rate at which reliable communication is feasible when there are no constraints on the packet length. Evaluated for fading channels, they are important performance benchmarks for wireless communication systems. However, the latency of a communication system is proportional to the length of the packets it exchanges, so assuming that there are no constraints on the packet length may be overly optimistic for communication systems with stringent latency constraints. Recently, there has been great interest within the information theory community in characterizing the maximum coding rate for short packet lengths. Research on this topic is often concerned with asymptotic expansions of the coding rate with respect to the packet length, which then give rise to normal approximations. In this paper, we review existing normal approximations for single-antenna Rayleigh block-fading channels and compare them with the high-SNR normal approximation we presented at the 2017 IEEE International Symposium on Information Theory (Lancho, Koch, and Durisi, 2017). We further discuss how these normal approximations may help to assess the performance of communication protocols. ; A. Lancho and T. Koch have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement number 714161), from the Spanish Ministerio de Economía y Competitividad under Grants TEC2013-41718-R, RYC-2014-16332 and TEC2016-78434-C3-3-R (AEI/FEDER, EU), from an FPU fellowship from the Spanish Ministerio de Educación, Cultura y Deporte under Grant FPU14/01274, and from the Comunidad de Madrid under Grant S2103/ICE-2845. G. Durisi has been supported by the Swedish Research Council under Grant 2016-03293.
Proceedings of: 2017 IEEE International Symposium on Information Theory, Aachen, Germany, 25-30 June, 2017 ; In this paper, we analyze the tradeoff between coding rate and asymptotic performance of a class of generalized low-density parity-check (GLDPC) codes constructed by including a certain fraction of generalized constraint (GC) nodes in the graph. The rate of the GLDPC ensemble is bounded using classical results on linear block codes, namely the Hamming bound and the Varshamov bound. We also study the impact of the decoding method used at GC nodes. To incorporate both bounded-distance (BD) and maximum-likelihood (ML) decoding at GC nodes into our analysis without having to resort to multi-edge-type degree distributions (DDs), we propose the probabilistic peeling decoder (P-PD) algorithm, which models the decoding step at every GC node as an instance of a Bernoulli random variable with a success probability that depends on the GC block code and its decoding algorithm. The P-PD asymptotic performance over the BEC can be efficiently predicted using standard techniques for LDPC codes such as density evolution (DE) or the differential equation method. Furthermore, for a class of GLDPC ensembles, we demonstrate that the simulated P-PD performance accurately predicts the actual performance of the GLDPC code. We illustrate our analysis for GLDPC code ensembles using (2, 6) and (2, 15) base DDs. In all cases, we show that a large fraction of GC nodes is required to reduce the original gap to capacity. ; This work has been funded in part by the Spanish Ministerio de Economía y Competitividad and the Agencia Española de Investigación under Grant TEC2016-78434-C3-3-R (AEI/FEDER, EU) and by the Comunidad de Madrid in Spain under Grant S2103/ICE-2845. T.
Koch has further received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement number 714161), from the 7th European Union Framework Programme under Grant 333680, and from the Spanish Ministerio de Economía y Competitividad under Grants TEC2013-41718-R and RYC-2014-16332. Pablo M. Olmos has further received funding from the Spanish Ministerio de Economía y Competitividad under Grant IJCI-2014-19150.
Proceedings of: 2017 IEEE International Symposium on Information Theory, Aachen, Germany, 25-30 June, 2017 ; This paper concerns the maximal achievable rate at which data can be transmitted over a non-coherent, single-antenna, Rayleigh block-fading channel using an error-correcting code of a given blocklength with a block-error probability not exceeding a given value. In particular, a high-SNR normal approximation of the maximal achievable rate is presented that becomes accurate as the signal-to-noise ratio (SNR) and the number of coherence intervals L over which we code tend to infinity. Numerical analyses suggest that the approximation is accurate already at SNR values of 15 dB. ; A. Lancho and T. Koch have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement number 714161), from the 7th European Union Framework Programme under Grant 333680, from the Spanish Ministerio de Economía y Competitividad under Grants TEC2013-41718-R, RYC-2014-16332 and TEC2016-78434-C3-3-R (AEI/FEDER, EU), from an FPU fellowship from the Spanish Ministerio de Educación, Cultura y Deporte under Grant FPU14/01274 and from the Comunidad de Madrid under Grant S2103/ICE-2845. G. Durisi has been supported by the Swedish Research Council under Grants 2012-4571 and 2016-03293.
The material in this paper was presented in part at the 2011 IEEE International Symposium on Information Theory (ISIT), Saint Petersburg, Russia, 31 July-5 August 2011 and the 49th Annual Allerton Conference on Communication, Control and Computing, Monticello, IL, USA, 28-30 September 2011. ; We study the information rates of noncoherent, stationary, Gaussian, and multiple-input multiple-output (MIMO) flat-fading channels that are achievable with nearest neighbor decoding and pilot-aided channel estimation. In particular, we investigate the behavior of these achievable rates in the limit as the signal-to-noise ratio (SNR) tends to infinity by analyzing the capacity pre-log, which is defined as the limiting ratio of the capacity to the logarithm of the SNR as the SNR tends to infinity. We demonstrate that a scheme estimating the channel using pilot symbols and detecting the message using nearest neighbor decoding (while assuming that the channel estimation is perfect) essentially achieves the capacity pre-log of noncoherent multiple-input single-output flat-fading channels, and it essentially achieves the best so far known lower bound on the capacity pre-log of noncoherent MIMO flat-fading channels. Extending the analysis to fading multiple-access channels reveals interesting relationships between the number of antennas and Doppler bandwidth in the comparative performance of joint transmission and time-division multiple access. ; The work of A.T.A. was supported in part by the Yousef Jameel Scholarship at the University of Cambridge. T. K. received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 252663, from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement number 714161), and from the Spanish Ministerio de Economía y Competitividad under grant RYC-2014-16332. The work of A.G.i.F.
has been funded by the European Research Council under ERC grant agreements 259663 and 725411.
This paper investigates the maximal achievable rate for a given blocklength and error probability over quasi-static multiple-input multiple-output fading channels, with and without channel state information at the transmitter and/or the receiver. The principal finding is that outage capacity, despite being an asymptotic quantity, is a sharp proxy for the finite-blocklength fundamental limits of slow-fading channels. Specifically, the channel dispersion is shown to be zero regardless of whether the fading realizations are available at both transmitter and receiver, at only one of them, or at neither of them. These results follow from analytically tractable converse and achievability bounds. Numerical evaluation of these bounds verifies that zero dispersion may indeed imply fast convergence to the outage capacity as the blocklength increases. In the example of a particular 1 × 2 single-input multiple-output Rician fading channel, the blocklength required to achieve 90% of capacity is about an order of magnitude smaller compared with the blocklength required for an AWGN channel with the same capacity. For this specific scenario, the coding/decoding schemes adopted in the LTE-Advanced standard are benchmarked against the finite-blocklength achievability and converse bounds. ; This work was supported in part by the Swedish Research Council under grant 2012-4571, by the Ericsson Research Foundation under grant FOSTIFT-12:022, by a Marie Curie FP7 Integration Grant within the 7th European Union Framework Programme under Grant 333680, by the Spanish government (TEC2009-14504-C02-01, CSD2008-00010, and TEC2012-38800-C03-01), and by the National Science Foundation under Grant CCF-1253205. The material of this paper was presented in part at the 2013 and 2014 IEEE International Symposium on Information Theory.
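The outage-capacity benchmark central to the record above admits a closed form in the simplest SISO Rayleigh special case (an illustration of the asymptotic quantity only; the paper treats quasi-static MIMO and Rician examples). With |h|² exponentially distributed with unit mean, the ε-outage capacity solves P[log₂(1 + SNR·|h|²) < R] = ε:

```python
import math

def outage_capacity(snr, eps):
    """eps-outage capacity (bits/channel use) of a SISO Rayleigh fading
    channel with |h|^2 ~ Exp(1), in closed form."""
    return math.log2(1 + snr * (-math.log(1 - eps)))

def outage_prob(snr, R):
    """Outage probability P[log2(1 + snr*|h|^2) < R] for the same channel."""
    return 1 - math.exp(-(2 ** R - 1) / snr)

snr = 10 ** (10 / 10)  # 10 dB
R = outage_capacity(snr, eps=0.1)
print(R, outage_prob(snr, R))  # outage probability evaluates back to ~0.1
```

Plugging the ε-outage capacity back into the outage probability recovers ε, confirming the closed form; the paper's finite-blocklength bounds quantify how fast actual codes approach this asymptotic benchmark.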