LTE Technology Overview (1) - LTE Scalable Bandwidth
LTE Technology Overview (2) - LTE different Frequency Bands
LTE Technology Overview (3) – Duplex Scheme support for both FDD and TDD
LTE Technology Overview (4) – MIMO
LTE Technology Overview (5) – OFDMA and SC-FDMA
LTE Technology Overview (6) – Modulation Scheme (formats)
LTE Technology Overview (7) – Higher Peak Data Rate
LTE Technology Overview (8) – Reduced Latency
LTE Technology Overview (9) – Transmission Time Interval (TTI)
LTE Technology Overview (10) – HARQ
LTE Technology Overview (11) – Flat Architecture
LTE Technology Overview (12) – Coexistence and Handoff
§ LTE is flexible, with a range of spectrum options from 1.25 MHz to 20 MHz (scalable bandwidth) - capable of flexible operation in different spectrum allocations (1.25, 1.6, 2.5, 5, 10, 15 and 20 MHz scalable bandwidth allocations, uplink and downlink, paired and unpaired).
§ Some operators cannot find 5 MHz of spectrum for HSPA, but LTE can use as little as 1.5 MHz or 1.7 MHz.
§ It can use TDD unpaired spectrum as well as FDD paired spectrum.
§ Seven bandwidth types are supported: 1.25 MHz, 1.6 MHz, 2.5 MHz, 5 MHz, 10 MHz, 15 MHz, and 20 MHz.
§ The bandwidths 1.25 MHz, 2.5 MHz, 5 MHz, 10 MHz and 15 MHz cover the narrower allocations.
§ In addition, 1.6 MHz is considered for specific cases, especially for the unpaired frequency band, for spectrum compatibility with LCR-TDD.
§ LTE is planned to be three to four times more efficient in the downlink than Release 6 HSDPA and two to three times more efficient in the uplink.
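The scalable-bandwidth idea above can be sketched numerically. This is a rough illustration, assuming the 15 kHz subcarrier spacing used in LTE and a hypothetical ~90% spectrum occupancy (the remainder left as guard band); the exact subcarrier counts differ per release and channelization.

```python
# Sketch: how the number of usable subcarriers scales with channel bandwidth.
# Assumes 15 kHz subcarrier spacing and an illustrative 90% occupancy factor;
# exact figures differ in the actual specifications.
SUBCARRIER_SPACING_HZ = 15_000

def usable_subcarriers(bandwidth_mhz, occupancy=0.9):
    """Approximate usable subcarriers for a given channel bandwidth."""
    total = bandwidth_mhz * 1e6 / SUBCARRIER_SPACING_HZ
    return int(total * occupancy)

for bw in (1.25, 2.5, 5, 10, 15, 20):
    print(f"{bw:>5} MHz -> ~{usable_subcarriers(bw)} subcarriers")
```

The same air interface simply uses more or fewer subcarriers as the allocation grows, which is what makes the bandwidth "scalable".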
LTE deployable in any of the “3GPP frequency bands”
§ FDD: 2.6 GHz, 2.3 GHz, 2.1 GHz, 1900 MHz, 1800 MHz, 1700/2100 MHz, 1500 MHz, 900 MHz, 850 MHz, 700 MHz, 450 MHz, etc.
§ TDD: 2.6 GHz, 2.3 GHz, 1.9/2.1 GHz, etc.
§ This allows it to be used in CDMA2000 and GSM spectrum allocations, although LTE is likely also to have its “own” spectrum at around 2.5 GHz.
§ LTE can use different channel bandwidths in different frequency bands
§ Duplex scheme: LTE E-UTRA supports both FDD and TDD modes, so it can use TDD unpaired spectrum as well as FDD paired spectrum, with the greatest possible commonality of design.
§ FDD and TDD can be supported on the same BS (eNodeB) platform.
§ For example, 1.6 MHz bandwidth is considered for specific cases, especially for the unpaired frequency band with spectrum compatibility with LCR-TDD
§ HSPA and GSM cannot operate in TDD spectrum but LTE can (as well as in paired FDD bands).
§ New unpaired (TDD) spectrum resources are likely to become available and LTE can use these resources for new services.
§ LTE is expected to be primarily an FDD system with paired spectrum (although support for TDD is also included).
§ Specified for FDD and TDD bands, LTE will cope better with the possible wider availability of unpaired TDD spectrum in the future.
FDD: Simultaneous downlink/uplink transmission in separate freq. bands
§ Paired spectrum required
§ Used in all commercial cellular systems
§ FDD preferred if paired spectrum available
TDD: Non-overlapping downlink/uplink transmission in the same freq. band
§ Possibility for deployment in single (unpaired) spectrum
§ Need for tight inter-cell synchronization/coordination
§ Reduced coverage due to non-continuous transmission (duty cycle < 100%)
§ TDD as complement to support deployment in unpaired spectrum
§ While FDD uses paired spectrum for UL and DL transmission, separated by a duplex frequency gap, TDD alternates UL and DL on the same spectral resources, separated by a guard time.
§ Each mode (FDD/TDD) has its own frame structure in LTE and these are aligned with each other meaning that similar hardware can be used in the base stations and terminals to allow for economy of scale.
§ The TDD mode in LTE is aligned with TD-SCDMA as well allowing for coexistence.
§ A key feature of LTE is Multiple Input Multiple Output (MIMO) operation, including spatial multiplexing (MIMO-A: SM) as well as pre-coding and transmit diversity.
§ In LTE downlink, MIMO employs multiple antennas at both base-station transmitter and terminal receiver.
§ E-UTRAN system uses OFDMA for the downlink (tower to handset) and Single Carrier FDMA (SC-FDMA) for the uplink and employs MIMO with up to four antennas per station.
§ E-UTRA uses OFDM and MIMO antenna technology to support more users, higher data rates and lower processing power required on each handset
§ LTE uplink supports the use of MIMO technology.
§ Only diversity (nx1 or 1xn) or downlink MIMO (Multiple In Multiple Out antennas, 4x2 and 2x2) is being developed - both 2-transmitter x 2-receiver and 4-transmitter x 4-receiver designs are being considered.
§ While the device uses only one transmit antenna, the single-user data rate cannot be increased with MIMO.
§ The cell level maximum data rate can be doubled, however, by the allocation of two devices with orthogonal reference signals.
§ The smart antenna technology MIMO is considered essential for LTE to maximize system capacity and provide high data rates.
§ Beamforming has been excluded from the standards, since it works best in TDD systems, where the uplink can be used to accurately estimate the downlink channel.
§ Smart antennas for the downlink (4x2, 2x2, 1x2) and uplink (1x2 and 1x1);
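The capacity benefit of spatial multiplexing for the antenna configurations listed above can be sketched as follows. The single-stream rate is a purely hypothetical placeholder; in the ideal case, the peak rate scales with the number of parallel streams, which is bounded by min(#Tx, #Rx).

```python
# Sketch: ideal spatial-multiplexing gain for the antenna configurations
# mentioned above. Peak rate scales with min(#Tx, #Rx) parallel streams;
# real-world gains depend on channel rank and SNR.
def spatial_streams(n_tx, n_rx):
    """Maximum number of parallel spatial streams."""
    return min(n_tx, n_rx)

BASE_RATE_MBPS = 25  # hypothetical single-stream rate, for illustration only
for n_tx, n_rx in [(1, 2), (2, 2), (4, 2), (4, 4)]:
    s = spatial_streams(n_tx, n_rx)
    print(f"{n_tx}x{n_rx}: {s} stream(s), up to ~{s * BASE_RATE_MBPS} Mbit/s")
```

Note that a 4x2 configuration still yields only two streams, which is why the extra transmit antennas are used for diversity and pre-coding rather than additional multiplexing.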
The 3GPP Release 8 air interface is being referred to as E-UTRA (Evolved UTRA).
§ The proposed E-UTRA system is assumed to use OFDMA for the downlink and single carrier FDMA (SC-FDMA) for the uplink.
§ Both OFDMA and SC-FDMA are very much related in terms of technical implementation and rely on the use of FFT/IFFT in the transmitter and receiver chain implementation.
§ Multiple antenna operation with spatial multiplexing (MIMO) has been a fundamental technology of LTE from the outset, and is well suited for LTE multiple access solutions.
§ With the adoption of OFDM technology, spectrum reuse is improved
§ OFDM inherently supports multiple tones, and LTE will be profiled for different numbers of tones. This will make upgrades from GSM to LTE easier.
§ LTE uses OFDMA (Orthogonal Frequency Division Multiple Access - the same as used by WiMAX) in the downlink to achieve the required higher data rates and efficiencies
§ The OFDMA is used in the downlink direction to minimize receiver complexity, especially with large bandwidths, and to enable frequency domain scheduling with flexibility in resource allocation.
§ SC-FDMA (Single Carrier Frequency Division Multiple Access) in the uplink;
§ The SC-FDMA is used to optimize the range and power consumption in the uplink
§ SC-FDMA in the LTE uplink greatly reduces the high PAPR that is an issue for OFDM, giving better battery consumption in the user device.
§ One of the issues of OFDM is battery life in user devices.
§ The major disadvantage of OFDM as used in the WiMAX uplink, certainly for mobile terminals, is that it requires a high Peak to Average Power Ratio (PAPR) – this happens because the individual signal modulations on the many orthogonal carriers occasionally peak together, requiring a very high transmit power.
§ LTE includes a mechanism to group the uplink tones, which reduces the need for linearity in the power amplifier and thereby lowers power consumption.
§ The problem comes in the use of a power amplifier just before transmission – Unfortunately for OFDM, the amplifier needs to be operating in a linear region even at peak transmission levels – so the average operating point has to be backed-off a long way down in the linear region – where the power efficiency of the amplifier is poor: 20% or so.
§ The trade-off, however, is that multipath is no longer eliminated (since the 50Mbit/s uplink is now confined to a narrower bandwidth) and adaptive equalization is needed at the base-station.
§ This is not such a disadvantage, as even a complex, power-hungry equalizer is affordable at the base-station and has no implications for handset battery life.
§ In contrast to a single-carrier system like a conventional Frequency Division Multiplex (FDM) system, OFDM is a transmission scheme using multiple sub-carriers: the provided bandwidth is split into multiple narrowband sub-carriers.
§ The sub-carriers are closely spaced to each other without causing interference, removing the guard bands between adjacent sub-carriers.
§ This is possible because the peak of one sub-carrier coincides with the null of an adjacent sub-carrier (i.e., the sub-carriers are orthogonal to one another).
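The orthogonality property described above can be checked numerically: two subcarriers spaced by exactly one subcarrier spacing (1/T over symbol period T) have zero inner product, so they overlap in frequency without interfering. A minimal sketch with illustrative parameters:

```python
import numpy as np

# Numeric check of subcarrier orthogonality: two complex exponentials spaced
# by exactly one subcarrier spacing integrate to (essentially) zero over one
# OFDM symbol, so adjacent subcarriers do not interfere.
N = 1024                       # samples per OFDM symbol (illustrative)
n = np.arange(N)
k1, k2 = 7, 8                  # adjacent subcarrier indices (arbitrary)
s1 = np.exp(2j * np.pi * k1 * n / N)
s2 = np.exp(2j * np.pi * k2 * n / N)

cross = abs(np.vdot(s1, s2)) / N   # normalized inner product, ~0
self_ = abs(np.vdot(s1, s1)) / N   # normalized self-correlation, = 1
print(f"cross-correlation ~ {cross:.2e}, self-correlation = {self_:.0f}")
```

The cross term vanishes because the phase difference completes whole cycles over the symbol period; any other spacing would leave a residual, which is exactly why the sub-carrier grid must be uniform.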
§ Sub-channelization defines sub-channels that can be allocated to Mobile Stations (MS) depending on their channel conditions and data requirements.
§ Using sub-channelization, a Mobile WiMAX Base Station (BS) can allocate within the same time slot more transmission power for lower Signal-to-Noise Ratio (SNR) cases and less power for higher SNR cases.
§ In addition, sub-channelization in the uplink can save a MS transmission power because it can concentrate power only on certain sub-channel(s) allocated to it.
Difference between OFDM and OFDMA
§ The difference between OFDM and OFDMA is that OFDMA has the ability to dynamically assign a subset of sub-carriers to individual users, adapting the technology to the mobility demands.
§ For this reason, OFDMA is chosen by the WiMAX Forum to form the backbone of Mobile WiMAX
OFDMA divides the sub-carriers into three types:
§ data sub-carriers for data transmission,
§ pilot sub-carriers for estimation and synchronization purposes, and
§ null sub-carriers used for guard bands (no transmission).
§ Among the sub-carriers, data and pilot sub-carriers are grouped into subsets called sub-channels. The PHY of Mobile WiMAX uses OFDMA to support sub-channelization in both downlink and uplink, but LTE adopts OFDMA in the DL only (SC-FDMA is used for the UL in LTE).
§ In GSM, the PAPR is much lower and the average operating point of the power amp can be set much higher – yielding efficiency nearer 70%. For handheld terminals this can make a huge difference to the power consumption of the terminal.
§ The LTE uplink gets around this by using SC-FDMA, which is just like OFDM with orthogonal frequencies, except that instead of each terminal using all the sub-carriers, each terminal uses only a few (different) carriers. This greatly reduces the PAPR, giving better battery consumption.
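The PAPR contrast described above can be demonstrated with a short simulation. This is an illustrative sketch, not an LTE waveform: it compares one OFDM symbol built from random QPSK subcarriers against a plain serial QPSK stream, whose envelope is constant.

```python
import numpy as np

# Sketch: why OFDM has a high peak-to-average power ratio (PAPR).
# Many independently modulated subcarriers occasionally add coherently,
# while serial single-carrier QPSK keeps a near-constant envelope.
rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

n_sc = 600                                   # illustrative active subcarriers
qpsk = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)
ofdm = np.fft.ifft(qpsk, 1024)               # one OFDM time-domain symbol
single_carrier = qpsk                        # serial QPSK, one symbol/sample

print(f"OFDM PAPR:           {papr_db(ofdm):.1f} dB")
print(f"Single-carrier PAPR: {papr_db(single_carrier):.1f} dB")
```

The several-dB gap is the back-off the power amplifier must absorb in OFDM, which is the efficiency penalty SC-FDMA is designed to avoid.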
LTE modulation constellations
§ HSDPA supports 16QAM; WiMAX and LTE can support up to 64QAM
§ The modulation methods available (for user data) in LTE are Quadrature Phase Shift Keying (QPSK), 16QAM and 64QAM.
§ The first two are available in all devices, while support for 64QAM in the uplink direction is a UE capability
§ Binary Phase Shift Keying (BPSK) has been specified for control channels, which use either BPSK or QPSK for control information transmission.
§ For the higher-order modulations, 16QAM and 64QAM, the impact of UE speed is higher
§ Higher-order modulation gives higher throughput
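The throughput scaling from modulation order follows directly from the bits carried per symbol, as this small sketch shows (coding rate and channel conditions held equal, which of course they are not in practice):

```python
import math

# Sketch: bits per symbol for the LTE data modulations listed above, and
# the relative peak-throughput scaling this implies at equal coding rate.
def bits_per_symbol(m):
    """Bits carried by one symbol of an M-ary constellation."""
    return int(math.log2(m))

for name, m in [("QPSK", 4), ("16QAM", 16), ("64QAM", 64)]:
    b = bits_per_symbol(m)
    print(f"{name:>6}: {b} bits/symbol -> {b / 2:.1f}x QPSK throughput")
```

This is why 64QAM triples the peak rate over QPSK, but only where the SNR is high enough to separate its 64 constellation points.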
Supported (DL) modulation formats:
§ Modulation / Multiple Access: Downlink: OFDM / OFDMA
§ Frequency selective scheduling in DL (i.e. OFDMA)
§ Adaptive modulation and coding (up to 64-QAM)
§ Adaptive modulation in downlink data channels are QPSK, 16 QAM and 64 QAM.
§ #subcarriers scales with bandwidth (76 ... 1201)
§ Allows simple receivers in the terminal in case of large bandwidth
§ The adoption of OFDMA for the downlink enabled better support of different bandwidth options.
§ For the feasibility studies, several advanced techniques are being considered on top of the basic OFDMA operation including frequency domain scheduling, MIMO antenna technologies, and variable coding and modulation.
Supported (UL) modulation formats:
§ For uplink, the focus is on a new approach, SC-FDMA (Single Carrier - Frequency Division Multiple Access).
§ Modulation / Multiple Access: Uplink: SC-FDMA (an FFT-based transmission scheme like OFDM)
§ Adaptive modulation in uplink data channels are BPSK, QPSK, and 16 QAM.
§ With better PAPR (Peak-to-Average Power Ratio)
§ This helps reduce PAPR, identified as a critical issue in the uplink, where efficient amplifiers are required.
§ Another important requirement is to maximize coverage. The base station scheduler assigns a unique time-frequency interval to a terminal for the transmission of user data, providing intra-cell orthogonality.
§ LTE provides significantly higher peak data rates
§ LTE has goals to reach:
§ Downlink speeds of 100 Mbit/s (DL)
§ Uplink speeds of 50 Mbit/s (UL)
in a 20 MHz spectrum allocation (i.e. 5 bit/s/Hz on the downlink), with a latency (one way, terminal to Node B) of 10 ms.
§ An increase of peak downlink spectral efficiency to 5 bit/s/Hz/cell with 2 UE Rx antennas - 3-4 times the spectral efficiency of HSDPA with 2 Tx and 2 Rx antennas in a loaded cell
§ 50 Mbps in 20 MHz, an increase in peak uplink spectral efficiency to 2.5 bit/s/Hz/cell (2-3 times the spectral efficiency of E-DCH with 1 Tx and 2 Rx antennas in a loaded cell)
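The targets above are consistent with each other, as a back-of-envelope check shows: peak rate is simply spectral efficiency times bandwidth.

```python
# Back-of-envelope check of the peak-rate targets above:
# spectral efficiency (bit/s/Hz) x bandwidth (Hz) = peak rate (bit/s).
def peak_rate_mbps(bandwidth_mhz, bits_per_sec_per_hz):
    """Peak rate in Mbit/s (MHz x bit/s/Hz = Mbit/s)."""
    return bandwidth_mhz * bits_per_sec_per_hz

print(f"DL: {peak_rate_mbps(20, 5.0):.0f} Mbit/s")   # 100 Mbit/s target
print(f"UL: {peak_rate_mbps(20, 2.5):.0f} Mbit/s")   # 50 Mbit/s target
```

So 5 bit/s/Hz over 20 MHz gives the 100 Mbit/s downlink figure, and 2.5 bit/s/Hz gives the 50 Mbit/s uplink figure.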
§ Improved latency: below 10 ms round-trip time between the terminal and the (evolved) eNode B.
§ This reduces latency and will deliver the required QoS to support VoIP services.
Latency targets in 3GPP Release 8:
§ Reduced User plane latency below 5 ms (UE to RAN edge) with 5 MHz or higher spectrum allocation.
§ With spectrum allocation below 5 MHz, latency below 10 ms should be possible
§ For user plane RAN portion the one way delays should be shorter than 5 ms.
§ Reduced control plane latency, from camped to active in under 100 ms - the state-transition delay from non-active to active state should be within 100 ms.
§ TTI refers to the length of an independently decodable transmission on the radio link.
§ A TTI window is a time period over which UE monitors random access response messages from the eNB.
§ The TTI can be implicitly given by the eNodeB when indicating the modulation, the coding scheme and the size of the transport blocks.
§ The required UE processing time depends on the length of the TTI; a shorter TTI leads to shorter required UE processing time.
§ For a given available processing time, a shorter TTI implies lower buffering requirements.
§ Fast scheduling with shorter TTI – allowing rapid adaptation to changing radio conditions.
§ By using a shorter TTI or smaller packet sizes, changes in the channel can be fed back and applied faster on the downlink
§ The TTI can either be a semi-static or a dynamic transport channel attribute. The TTI is set through higher layer signaling in semi-static TTI case, or controlled by the eNodeB in a more dynamic way in order to improve HARQ process, for instance.
TTI in 3GPP:
§ In 3GPP R99: The Transmission Time Interval (TTI) is known as: 10, 20, 40 or 80 ms.
§ In UMTS R5, the shorter TTI for HSDPA is introduced & reduced to 2ms.
§ HSUPA adds the ability to shorten the transmit time (TTI) from 10 ms to 2 ms
§ In LTE/SAE: a Transmission Time Interval (TTI) of 1 ms was agreed (to reduce signalling overhead and improve efficiency)
§ The minimum size of the physical resources that can be allocated corresponds to the minimum TTI - the minimum downlink TTI duration corresponds to one subframe duration, i.e. 1 ms.
Advantages for shorter TTI:
§ Faster response to link conditions, allowing the system to quickly schedule transmissions to mobiles, which increases system capacity.
§ Lower probability of error due to changing channel conditions
§ More efficient when packet retransmission is necessary
§ Decreased buffer size
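The advantages listed above scale directly with TTI length. As a sketch, assume a fixed HARQ round trip of 8 TTIs (a common illustrative figure, not a value from these notes): the time before a failed block can be retransmitted then shrinks in proportion to the TTI.

```python
# Sketch: how TTI length drives HARQ reaction time.
# Assumes an illustrative fixed round trip of 8 TTIs.
HARQ_RTT_TTIS = 8

def retransmission_delay_ms(tti_ms):
    """Time until a NACKed block can be retransmitted, in ms."""
    return HARQ_RTT_TTIS * tti_ms

for tti in (10, 2, 1):   # R99, HSDPA, LTE TTIs in ms
    print(f"TTI {tti:>2} ms -> retransmission after {retransmission_delay_ms(tti)} ms")
```

Shorter delay also means less data is held waiting for ACK/NACK decisions, which is the decreased-buffer-size advantage above.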
§ Fast PHY Layer HARQ in both down and uplinks in LTE;
§ HARQ is the function that integrates the existing Automatic Repeat Request (ARQ) function and an error correction technique (i.e., Chase combining (CC) and optionally Incremental Redundancy (IR)) that combines an error-detected packet with retransmission packets.
§ HARQ uses a PHY layer “Stop and Wait” protocol which provides fast response to packet errors and thus improves cell edge coverage.
§ When the data block cannot be decoded without error and the maximum number of HARQ transmissions is reached, a higher layer, such as MAC or TCP/IP, retransmits the data block. In that case, all previous transmissions are cleared, and the HARQ process starts over.
§ Error detection in the HARQ system is implemented by allocating a dedicated ACK channel in the uplink to provide feedback (i.e., ACK or NACK signaling) for fast retransmission in case the packet is in error. The receiver will keep the error packet and implement CC or IR to jointly process the packets in error and new transmission to improve the packet reception.
§ Note: HARQ is fundamentally different from ARQ in how it handles a packet error. HARQ stores the erroneous packet rather than discarding it, and combines it with the resent one. This reduces the occurrence of packet errors more than resending the packet alone.
§ LTE ARQ/HARQ for DL in eNB and aGW
3 types of HARQ:
1) Chase Combining (CC):
§ Retransmits the same packet as that of the first attempt
§ The decoder combines multiple received copies of the coded packet weighted by the SNR prior to decoding.
§ This method provides time diversity gain and is very simple to implement.
2) Partial Incremental Redundancy (Partial IR):
§ Retransmits a partially different packet from the first one.
§ Each packet transmitted in the partial IR scheme is self-decodable because it has the systematic bits of turbo codes.
§ Instead of sending simple repeats of the entire coded packet, additional redundant information is incrementally transmitted.
3) Full Incremental Redundancy (Full IR):
§ Retransmits a fully different packet from the first one.
§ Retransmission packets are not self-decodable.
Comparison of HARQ Types:
§ IR usually yields better performance compared to Chase Combining. However, it requires more implementation complexity and may not result in good performance unless the link adaptation errors are very large.
§ Chase Combining yields reasonable performance with lower implementation complexity and cost.
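The Chase combining behaviour described above can be sketched with a small BPSK simulation (an illustrative model, not an LTE turbo-coded link): the receiver keeps each noisy copy of the same packet and combines them weighted by SNR before the decision, raising the effective SNR with every retransmission.

```python
import numpy as np

# Sketch of Chase combining: combine multiple noisy copies of the same
# packet, weighted by SNR, before making bit decisions.
rng = np.random.default_rng(1)

bits = rng.integers(0, 2, 1000)
tx = 1.0 - 2.0 * bits                  # BPSK mapping: 0 -> +1, 1 -> -1

def receive(snr_linear):
    """One noisy reception of the transmitted packet."""
    noise = rng.normal(0, np.sqrt(1 / (2 * snr_linear)), tx.size)
    return tx + noise

snrs = [0.5, 0.5]                      # two low-SNR (re)transmissions
copies = [receive(s) for s in snrs]

# SNR-weighted combination of all received copies
combined = sum(s * r for s, r in zip(snrs, copies)) / sum(snrs)

single_errs = int(np.sum((copies[0] < 0) != bits))
combined_errs = int(np.sum((combined < 0) != bits))
print(f"errors, single copy: {single_errs}, after combining: {combined_errs}")
```

Combining roughly halves the noise power here, so the combined decision makes noticeably fewer errors than either copy alone, which is the time-diversity gain mentioned above.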
§ HARQ in Layer 1: The HARQ functionality ensures delivery between peer entities at Layer 1 / the MAC sublayer.
HARQ characteristics: The HARQ within the MAC sublayer has the following characteristics:
§ N-process Stop-And-Wait HARQ is used
§ The HARQ is based on ACK/NACKs
§ HARQ transmits and retransmits transport blocks;
§ Measurement gaps are of higher priority than HARQ retransmissions: whenever an HARQ retransmission collides with a measurement gap, the HARQ retransmission does not take place.
In the downlink:
§ Asynchronous adaptive HARQ - in the downlink, asynchronous retransmissions with adaptive transmission parameters are supported
§ Uplink ACK/NACKs in response to downlink (re)transmissions are sent on PUCCH or PUSCH;
§ PDCCH signals the HARQ process number and whether it is a transmission or retransmission;
§ Retransmissions are always scheduled through PDCCH.
In the uplink:
§ Synchronous HARQ - in the uplink, HARQ is based on synchronous retransmissions
§ Maximum number of retransmissions configured per UE (as opposed to per radio bearer);
§ Downlink ACK/NACKs in response to uplink (re)transmissions are sent on PHICH;
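The N-process Stop-And-Wait scheme mentioned in the HARQ characteristics can be sketched as follows. A single stop-and-wait process idles while waiting for its ACK/NACK, so with a round trip of N TTIs (8 here, an illustrative figure), N interleaved processes are needed to keep the channel full.

```python
# Sketch: why N parallel Stop-And-Wait (SAW) HARQ processes are used.
# One SAW process transmits, then idles for a full round trip; interleaving
# N processes fills the idle TTIs with other processes' transmissions.
HARQ_RTT_TTIS = 8   # illustrative HARQ round trip, in TTIs

def utilization(n_processes, rtt_ttis=HARQ_RTT_TTIS):
    """Fraction of TTIs carrying data with n interleaved SAW processes."""
    return min(n_processes / rtt_ttis, 1.0)

for n in (1, 4, 8):
    print(f"{n} process(es): {utilization(n):.0%} channel utilization")
```

With as many processes as TTIs in the round trip, every TTI carries a transmission or retransmission, while each individual process still follows the simple stop-and-wait protocol.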
§ ARQ in Layer 2: The ARQ functionality provides error correction by retransmissions in acknowledged mode at Layer 2 (RLC).
ARQ characteristics: The ARQ within the RLC sublayer has the following characteristics:
§ ARQ retransmissions are based on HARQ/ARQ interactions
§ ARQ retransmits RLC PDUs or RLC SDU (IP packets) segments based on RLC status reports;
§ Polling for RLC status report is used when needed by RLC;
§ Status reports can be triggered by upper layers.
§ ARQ uses knowledge obtained from the HARQ about the transmission / reception status of a Transport Block
§ LTE supports a flat architecture (eNB and aGW), simpler than GSM and UMTS with their base station controllers (BSC). Base stations will provide more of the network processing power - the RNC and Node B are effectively combined into an eNode-B (eNB).
§ The major change in the LTE architecture is the transformation of the multi-tier RAN model into a flat, two-tier structure.
§ The flat architecture reduces the number of network nodes (or pieces of equipment) deployed and also reduces delay during service delivery.
§ The System Architecture Evolution (SAE) is the 3GPP vision for the Evolved Packet Core (EPC).
§ The control plane is also simplified with a separate Mobility Management Entity (MME).
§ There are only two types of nodes in the user plane – base stations and anchor gateways. LTE has begun to deliver initial results at 70 Mb/sec with 20 MHz of spectrum. It is moving towards the promised data rates of a 100 Mb/sec.
§ E-UTRA is an entirely new air interface system, unrelated to and incompatible with W-CDMA.
§ LTE will also support seamless handover to cell towers with older network technologies such as GSM, cdmaOne, W-CDMA (UMTS), and CDMA2000. The TDD mode in LTE is aligned with TD-SCDMA as well, allowing for coexistence. → Session Convergence using the Evolved Packet Core (EPC)
§ All services are provided by the IMS, with gateways to non-IP networks → Service Convergence using IMS and Common IMS
§ LTE can be used in CDMA2000 and GSM spectrum allocations,
§ Scalable bandwidth in LTE allows CDMA operators to migrate to LTE in a 5 MHz band
§ Increased spectrum flexibility, with spectrum slices as small as 1.5 MHz (and as large as 20 MHz) supported. (W-CDMA requires 5 MHz slices, leading to problems with roll-outs in countries where 5 MHz is a commonly allocated amount of spectrum and is frequently already in use by legacy standards such as 2G GSM and cdmaOne.) Limiting slices to 5 MHz also limited the amount of bandwidth per handset.
§ The market for LTE is likely to be driven not by WCDMA or HSPA operators but instead by GSM and CDMA2000 operators, wanting to provide additional services on their networks by adding LTE capacity as the network grows, using both new and existing spectrum licenses. LTE could also be the first global standard for mobile networks that brings together the GSM and CDMA2000 communities.