
March 06, 2010

Comments

John

The core argument you use is that paired spectrum leads to symmetric speeds. This is simply not true.

In addition, see some details about the actual FDD speeds being achieved in the Telia Sweden network:

http://northstream.se/lte-experience-%e2%80%93-it%e2%80%99s-getting-better/

Franz

Hi there,

You are making a huge assumption here, based on the past, about asymmetrical spectrum demand. Nowadays, killer applications like Facebook, Twitter and various blogging systems can generate even more traffic on the uplink than on the downlink. On top of that, LTE uses two different technologies for uplink and downlink, so equal bandwidth doesn't mean the same user throughput.

brough

John, I'm aware that mobile device battery considerations typically result in asymmetric speeds, i.e. the uplink is less capable. My key point is that data traffic is typically not symmetric and, more to the point, the uplink and downlink loads vary second by second. TDD allows the total available resource to be dynamically partitioned based on instantaneous demand. FDD fixes the uplink/downlink allocation - something that seemed reasonable for the symmetric load that voice traffic provides but which makes no sense for dynamically varying data traffic.
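To make the difference concrete, here is a minimal sketch (my own illustrative numbers, not measured traffic) comparing a fixed 50/50 FDD-style split with a per-frame adaptive TDD-style split under bursty, asymmetric demand:

```python
# Minimal sketch: fixed FDD-style split vs. adaptive TDD-style split.
# All numbers (frame capacity, burst sizes) are illustrative assumptions.
import random

random.seed(1)

FRAMES = 10_000        # e.g. 10 ms frames, ~100 s of traffic
CAPACITY = 100         # resource units available per frame (UL + DL combined)

fdd_served = tdd_served = offered = 0.0
for _ in range(FRAMES):
    # Bursty, asymmetric instantaneous demand: mostly downlink, with
    # occasional big uplink spikes (uploads, photo/video posts, etc.).
    dl = random.expovariate(1 / 60)        # mean 60 units of DL demand
    ul = random.expovariate(1 / 15)        # mean 15 units of UL demand
    if random.random() < 0.05:             # occasional large upload burst
        ul += 200
    offered += dl + ul

    # FDD: the split is fixed forever, e.g. symmetric paired spectrum.
    fdd_served += min(dl, CAPACITY / 2) + min(ul, CAPACITY / 2)

    # TDD: the same total resource, repartitioned every frame to match demand.
    tdd_served += min(dl + ul, CAPACITY)

print(f"offered        : {offered:10.0f}")
print(f"FDD (fixed)    : {fdd_served:10.0f}  ({100*fdd_served/offered:.1f}% served)")
print(f"TDD (adaptive) : {tdd_served:10.0f}  ({100*tdd_served/offered:.1f}% served)")
```

Even with this crude model the adaptive split serves noticeably more of the offered traffic, simply because it can hand idle uplink capacity to the downlink (and vice versa) frame by frame.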

Franz, the important thing about data is not the long-term average (which may be more symmetric in the future, but is still heavily asymmetric today); the issue is that instantaneous traffic is highly asymmetric. Indeed, the statistics of IP traffic are highly variable until you are able to average over hundreds, or better thousands, of active users' traffic. I posted quite a bit of data on instantaneous data traffic back in December 2009. See for example: http://blogs.broughturner.com/2009/12/broadband-capacity-sizing-buffers-to-handle-traffic-bursts.html
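The smoothing-by-aggregation effect is easy to reproduce. A small sketch (purely illustrative on/off users, not the December data) shows the coefficient of variation of per-second demand falling as more independent users are summed:

```python
# Illustrative model: each user is idle most of the time and occasionally bursts.
# The relative variability of the aggregate falls roughly as 1/sqrt(N).
import random
import statistics

random.seed(2)

def user_demand():
    """One user's demand in a 1 s interval: usually idle, occasionally bursting."""
    return random.expovariate(1.0) * 50 if random.random() < 0.1 else 0.0

for n_users in (1, 10, 100, 1000):
    totals = [sum(user_demand() for _ in range(n_users)) for _ in range(2000)]
    mean = statistics.mean(totals)
    cov = statistics.stdev(totals) / mean
    print(f"{n_users:5d} users: mean={mean:9.1f}, coeff. of variation={cov:.2f}")
```

With one user the per-second demand swings wildly; only somewhere around a thousand users does the aggregate start to resemble its long-term average.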

Ignacio Berberana

The idea that you can allocate TDD capacity dynamically between uplink and downlink does not hold in many operating conditions, e.g., in most cellular networks. Imagine two users, close to each other but connected to different base stations (each at its respective cell edge). If one is transmitting while the other is receiving at the same point in time, the latter can be heavily interfered with by the former. In practice you need a fixed allocation of resources between UL and DL, and you need the base stations tightly synchronized, which is complicated if you want to support a hierarchical network with macro, micro and femto cells.
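A rough back-of-envelope (illustrative numbers of my own) shows how bad this UE-to-UE case can get: a terminal receiving downlink from its own base station roughly 1 km away, while a neighbouring cell's terminal 20 m away transmits uplink in the same slot on the same frequency:

```python
# Back-of-envelope UE-to-UE cross-link interference; all figures are assumptions.
import math

def path_loss_db(d_m, exponent=3.5, pl_1m=40.0):
    """Very simple log-distance path-loss model (illustrative assumption)."""
    return pl_1m + 10 * exponent * math.log10(d_m)

bs_tx_dbm = 46.0   # macro base station transmit power (assumed)
ue_tx_dbm = 23.0   # interfering terminal transmit power (typical UE class)

signal_dbm = bs_tx_dbm - path_loss_db(1000)   # wanted DL from ~1 km away
interf_dbm = ue_tx_dbm - path_loss_db(20)     # unwanted UL from ~20 m away

print(f"wanted DL signal       : {signal_dbm:6.1f} dBm")
print(f"UE-to-UE interference  : {interf_dbm:6.1f} dBm")
print(f"signal-to-interference : {signal_dbm - interf_dbm:6.1f} dB")
```

With this simple model the wanted signal arrives tens of dB below the interference, so the receiving terminal is simply swamped.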

There are other technical reasons to favor FDD over TDD, and the question is not as simple as you indicate. The problem of traffic asymmetry, on the other hand, may not be as important as you suggest, as the UL usually has a lower spectral efficiency than the DL for several reasons.

brough

Ignacio, I disagree. You can certainly coordinate TDD frames between adjacent cell sites if you are designing a system from scratch and for the future. Yes, it's hard to imagine inter-cell-site coordination with 2 Mbps backhaul, but with 100 Mbps backhaul it's certainly feasible to coordinate the TDD frame structure at a 10 ms or 20 ms frame rate. Also, as we look over the life of LTE, we should see widespread deployment, and continuous improvement, of digital beam forming, which will tend to reduce the occasions when inter-cell coordination is needed.

Again, short-term variations in IP traffic are extreme (see the 3rd diagram here: http://su.pr/1Vmge3) and largely independent of traffic averages. Being able to reallocate up/down bandwidth every 20 ms is a substantial advantage over fixing the bandwidth for all time.

Ignacio Berberana

Brough, even if you are able to coordinate the frames between adjacent cells such that uplink in one never overlaps downlink in the other, the fact is that you cannot take full advantage of the variability of the traffic. Imagine that one user is uploading a big file while the other is downloading an equally big file: there is no way you can assign most of the resources to the uplink in one cell and most of them to the downlink in the other simultaneously without incurring interference. Using beam forming in the terminal could be a solution, but a long-term one, and I think your point is that opting for the FDD mode now (not in the future) is stupid. Also take into account that this is not the only interference problem to be dealt with: you may also have base-station-to-base-station interference (very likely to happen in a macrocell environment). In the end, the solution would be to assign the same resources to DL and UL across a large pool of synchronized base stations. You may change that allocation rapidly, as you suggest, but you cannot adapt the frame structure in each cell to the instantaneous traffic demand.

Now, the main reason for preferring FDD to TDD from the spectral-efficiency viewpoint is that the former allows for easier and more effective fast modulation and coding adaptation and scheduling, i.e., riding the highs and lows of the propagation channel. It also supports faster HARQ processes. These features, which are responsible for a significant part of the increased spectral efficiency of MBB systems like LTE and WiMAX, are designed to take advantage of the channel variability.

There are advantages associated with TDD, of course, but the issue is not as crystal clear as you indicate in your comment. And, on the other hand, there is a TDD mode for LTE, which will be deployed in China.

Franz

I agree with you, Ignacio.

Brough, TDD needs synchronization, which has its own price. Without sync it's a perfect solution for traffic hotspots, but in that case why deploy LTE TDD when future WiFi standards will provide much more capacity than LTE? Spectral efficiency could be much higher in synchronized networks, that's true. But even nowadays in legacy systems, where operators could get much better performance (for example, GSM networks with synchronized hopping), they don't do it... because of the cost, even if it's only the license for an additional feature.

brough

Ignacio (and Franz),
When designing a system that will have a useful life of 20+ years, it's important to leverage what's technically feasible during that period. So if you call the commercial life of LTE "long term," then yes, we need to think long term. After all, LTE will be widely deployed perhaps 5+ years from now and adoption will peak 10-20 years from now. Useful LTE deployments should have 100+ Mbps backhaul per cell site today and certainly will have such backhaul links 5 years from now. With 100+ Mbps links, what is the problem with jointly optimizing TDD frame structures across adjacent cells? Whether you do this at LTE's 10 ms frame rate or the 1 ms sub-frame rate, it's certainly feasible. Also, you don't require joint optimization across a huge pool of cell sites. You require local optimization between adjacent cells for just the fraction of the traffic that comes from mobile devices near cell boundaries.
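The signalling load for that coordination is tiny. A quick back-of-envelope (assumed message sizes, not anything from the LTE specs):

```python
# Rough coordination-overhead estimate; message size and neighbour count are assumptions.
backhaul_bps = 100e6   # 100 Mbps backhaul per cell site
frame_s      = 0.010   # 10 ms LTE frame
msg_bytes    = 500     # assumed UL/DL split decision + scheduling hints per message
neighbours   = 6       # typical number of adjacent macro cells

coord_bps = neighbours * msg_bytes * 8 / frame_s
print(f"coordination traffic : {coord_bps/1e6:.2f} Mbps")
print(f"share of backhaul    : {100*coord_bps/backhaul_bps:.2f} %")
# -> roughly 2.4 Mbps with these assumptions, i.e. a few percent of the backhaul;
#    even ten times that, at the 1 ms sub-frame rate, would still fit on the link.
```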

While you will never perfectly match instantaneous traffic demands, dynamic allocation can do vastly better than a fixed allocation. Considering the highly variable nature of IP traffic, when sampled at 1 ms and 10 ms intervals, this is a major advantage for TDD.

Also note that frame rates are the same for FDD & TDD (as defined for LTE). Thus there is no impact on HARQ processes. Also, there is a (slight) additional advantage for TDD in that the instantaneous radio channel characteristics are directly available from the local receiver without waiting for information from the receiver at the other end of an FDD link.

By the way, you will only see beam forming at the base station, since beam forming requires physical separation of antenna elements by significant multiples of a wavelength. However, since beam forming can be used on both transmit and receive, beam forming at the base station can be highly effective in both directions.

I'm surprised neither of you raised the one argument where FDD may have an advantage: different modulation in each direction. The FDD LTE uplink uses SC-FDMA (or, more accurately, DFT-spread OFDM or linearly precoded OFDM) instead of the downlink's OFDMA. SC-FDMA has a better peak-to-average power ratio and thus simplifies the mobile device's power amplifier and extends battery life. Even here, it should be possible to mix these modulations within a TDD frame since both are variants of OFDM, though I haven't studied the LTE TDD specs to know whether this has been done.
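The PAPR difference is easy to see numerically. Here's a minimal sketch (illustrative subcarrier counts and FFT size, not the exact LTE numerology) comparing plain OFDM subcarrier mapping with DFT-spreading the same QPSK symbols first, as the LTE uplink does:

```python
# Compare peak-to-average power ratio of OFDMA vs. DFT-spread OFDM (SC-FDMA-like).
import numpy as np

rng = np.random.default_rng(0)
N_SC, N_FFT, N_SYM = 300, 512, 2000   # occupied subcarriers, IFFT size, symbols

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

ofdma_papr, scfdma_papr = [], []
for _ in range(N_SYM):
    qpsk = (rng.choice([-1, 1], N_SC) + 1j * rng.choice([-1, 1], N_SC)) / np.sqrt(2)

    # OFDMA: QPSK symbols go directly onto the occupied subcarriers.
    grid = np.zeros(N_FFT, complex)
    grid[:N_SC] = qpsk
    ofdma_papr.append(papr_db(np.fft.ifft(grid)))

    # SC-FDMA style: DFT-precode the block first, then map onto the same subcarriers.
    grid = np.zeros(N_FFT, complex)
    grid[:N_SC] = np.fft.fft(qpsk) / np.sqrt(N_SC)
    scfdma_papr.append(papr_db(np.fft.ifft(grid)))

print(f"OFDMA   99th-percentile PAPR: {np.percentile(ofdma_papr, 99):.1f} dB")
print(f"SC-FDMA 99th-percentile PAPR: {np.percentile(scfdma_papr, 99):.1f} dB")
```

The DFT-spread version typically comes out a few dB lower, which is exactly the power-amplifier headroom saving at stake.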

Ignacio Berberana

Brough, I (partially) disagree with you again. I am not against TDD at all and I see a huge potential for it. But it seems to me that you are very much focused on one issue which is certainly very important (adapting the allocation of radio resources between UL and DL as a function of the traffic demand, if I understood it correctly) and ignore other aspects of what is a very complex problem (and, based on that, decide that the solution adopted by the majority of operators is simply stupid, which I think is unfair).

One aspect I am not sure you are considering is the fact that most of the capacity increase in mobile networks during the last 30 years has been provided by infrastructure-related solutions. Getting the network closer to the user is the only long-term solution for providing the required capacity: micro/pico/femto cells, repeaters, relays, DAS, etc. A network with high capillarity is obviously more difficult to coordinate, however large the backhaul capacity you may have. And, to be frank, coordinating the allocation of resources between UL and DL across cells, while also defining the modulation and coding scheme to be used for each user, selecting the MIMO mode, scheduling resources among users (providing QoS if possible), etc., looks to me like an extremely complex problem in such an environment. It is not only a question of having the capacity to exchange information between nodes; it is also a question of the processing that must be carried out in each node.

Another issue is the fact that the main problem in mobile networks is interference. Capacity is limited by interference in most operating scenarios (backhaul problems are not as severe as usually assumed and can certainly be solved more easily). If you have to choose between a solution that may pose some additional interference problems but offers larger capacity (if everything goes right) and one that limits the interference problems at the cost of some inefficiency in the resource allocation, the selection is clear. The real problem of mobile broadband is not to achieve 100 Mbit/s for the users that are close to the base station; it is to achieve 1 Mbit/s (or even 300 kbit/s) for the users at the cell edge. This is the problem that LTE-Advanced and 802.16m are intended to solve with cooperative MIMO, relays and other solutions.

It is not true that TDD-mode HARQ performance is the same as FDD's. In a TDD frame, ACKs and NACKs can only be sent when resources are allocated to the corresponding direction. So, for example, if you have a 10 ms frame and allocate 6 ms to DL subframes and 4 ms to UL subframes, then for the DL block sent in the first subframe the ACK can only be sent in the UL after 6 ms. If this allocation becomes variable, then the HARQ process becomes asynchronous, which may be less efficient. In an FDD system you do not have this limitation. Also, the adaptation of the modulation and coding scheme or the MIMO mode to be used has to be decided at the frame rate.
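A small sketch of the timing (a simplified frame model with an assumed 4-subframe decode time, not the actual LTE TDD timing tables) makes the point:

```python
# Earliest ACK opportunity for each DL subframe in a 6 DL + 4 UL TDD frame.
DL_SUBFRAMES = list(range(0, 6))    # subframes 0-5 carry downlink
UL_SUBFRAMES = list(range(6, 10))   # subframes 6-9 carry uplink
MIN_PROC = 4                        # assumed decode/turnaround time, in 1 ms subframes

for dl in DL_SUBFRAMES:
    earliest = dl + MIN_PROC
    # First UL subframe (in this frame or the next) at or after the earliest opportunity.
    ack = next(n for n in range(earliest, earliest + 20)
               if (n % 10) in UL_SUBFRAMES)
    print(f"DL data in subframe {dl}: ACK in subframe {ack} -> delay {ack - dl} ms")
```

The ACK delay depends on where in the frame the data landed, and with a variable UL/DL split those positions keep moving, which is why the HARQ process ends up asynchronous.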

Beam forming does not require the antennas to be separated by multiples of a wavelength (that is required for other forms of MIMO, like spatial multiplexing). In fact, there are some companies proposing beam forming capabilities for the terminal, like Magnolia.

The use of SC-FDMA is common to FDD and TDD LTE. I am not sure this is an important advantage with respect to other options, like OFDMA (in WiMAX). PAPR problems can be solved by other means, and OFDMA makes it easier to implement advanced receivers, like maximum likelihood.

You indicate that we should think for the long term, and you are certainly right. In this sense, are you sure that the IP traffic patterns are not going to change over the next 20 years? Of course, about this point, I must admit my ignorance.

