P. Asbeck, D. Bharadia, I. Galton, D. Hall, H.-P. Le, P. Mercier and G. Rebeiz, "Integrated Circuits for Wireless Communications: Research Activities at the University of California, San Diego," in IEEE Microwave Magazine, vol. 24, no. 5, pp. 30-44, May 2023.

The continuing demand for improved wireless connectivity and enhanced data rates has spurred worldwide research in microwave and millimeter (mm)-wave circuits and systems within academia, government, and industrial centers. At the University of California, San Diego (UCSD), the Center for Wireless Communications (CWC) was established more than two decades ago and has contributed to research in analog, microwave, and mm-wave circuits and systems; communication and information theory; coding; and application studies for 4G, 5G, and 6G wireless systems. This article highlights recent UCSD research efforts, emphasizing the microwave and mm-wave circuits and systems and accompanying analog circuit techniques, carried out in conjunction with the CWC.


Achieving More with Less for Millimeter-Wave Systems, Profs. Bhaskar Rao and Piya Pal

In mmWave systems, although the number of antennas is large, the number of RF chains is limited to contain hardware complexity. This calls for mixed processing: some RF combining followed by digital baseband processing. Motivated by these hardware constraints, Prof. Rao and Prof. Pal are developing novel channel sensing and estimation methods that help to 'do more with less'. This design principle attempts to preserve the rich spatial sensing capabilities afforded by the multiple antennas in spite of the reduced number of RF chains. This is achieved by finding a suitable mapping from the high-dimensional antenna space to the low-dimensional RF-chain space. Fortunately, the RF channel in the mmWave band is specular and spatially sparse, and the work makes explicit use of this structure. Prof. Rao's student Rohan Pote has been developing a novel spatial sensing method with a linear time-invariant structure that leads to a virtual nested array in baseband. The sensing is complemented by Bayesian inference methods, in particular Sparse Bayesian Learning, to exploit the spatial characteristics. An important structure in this case is the Toeplitz structure of the spatial covariance matrix that results from a uniform linear array. This enables localizing multiple sources with a single snapshot, which is not possible with popular algorithms such as MUSIC (illustrated in the accompanying figure with 12 sensors and 5 sources, whose locations are marked by dotted lines). Prof. Pal's students Pulak Sarangi and Mehmet Can Hucumenoglu have recently addressed an important outstanding question on sparse arrays (such as nested arrays): how much temporal processing is needed to harness the benefits of the full virtual array? Since sparse arrays typically rely on the correlation between sensor pairs, this has led to the belief that they require a large number of temporal measurements to reliably estimate the parameters of interest, which may be undesirable for contemporary applications in autonomous sensing and high-mobility millimeter-wave channels. Their recent analysis dispels this 'myth of large temporal snapshots' by providing the first non-asymptotic results on the super-resolution performance of nested arrays. A key insight is that the large difference coarray of a well-designed sparse array not only offers superior resolution capabilities and noise resilience (as empirically observed in prior works), but is also capable of compensating for the large covariance estimation error in the limited-snapshot regime. More information can be found in the related publications from the groups or by contacting the faculty members.
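As a minimal illustration of why a nested array "does more with less", the Python sketch below (not the authors' code; the sensor counts are arbitrary) builds a two-level nested array and computes its difference coarray, showing how a handful of physical sensors yields a much larger set of contiguous virtual lags for covariance-based localization.

```python
import numpy as np

def nested_array(n1, n2):
    """Two-level nested array: a dense ULA of n1 sensors plus a sparse
    outer array with spacing (n1 + 1), in units of half-wavelength."""
    inner = np.arange(1, n1 + 1)                 # 1, 2, ..., n1
    outer = (n1 + 1) * np.arange(1, n2 + 1)      # (n1+1), 2(n1+1), ...
    return np.concatenate([inner, outer])

def difference_coarray(positions):
    """All pairwise differences (virtual lags) of the physical array."""
    diffs = positions[:, None] - positions[None, :]
    return np.unique(diffs)

pos = nested_array(3, 3)                         # 6 physical sensors
lags = difference_coarray(pos)
print("physical sensors:", len(pos))             # 6
print("unique virtual lags:", len(lags))         # 23, contiguous from -11 to 11
print("lag range:", lags.min(), "to", lags.max())
```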


A 140 GHz Scalable On-Grid 8x8-Element Transmit-Receive Phased-Array with Up/Down Converters and 64QAM/24 Gbps Data Rates

Gabriel M. Rebeiz, Amr Ahmed, Linjie Li, Minjae Jung

D-band communications (110-170 GHz) can be key to fulfilling the increasing demand for low-latency, high-data-rate links, owing to the wide unallocated frequency spectrum of up to 60 GHz. Hence, D-band (140 GHz) wireless systems have become an emerging topic for 6G communications. However, D-band systems are very challenging due to the need for large phased arrays to overcome the propagation loss (space loss factor).

At UCSD, we developed the first-ever on-grid 2-D 8x8-element transmit-receive array at 140 GHz with wide scanning angles and full array scalability. The 140 GHz wafer-scale RFIC (built in 45RFSOI) containing 64 channels and an up/down-converter to an IF of 9-14 GHz, the 8x8 antenna-in-package module, and the DC/SPI boards were all designed at UCSD. The measured Tx and Rx phased-array system can support a communication link of up to 24 Gbps with <4% EVMrms for both transmit and receive operation. The measured TX EIRP is 37.5 dBm at 140 GHz, which is the highest value reported for silicon technologies to date.
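To put the reported 37.5 dBm EIRP in context, the short Python sketch below works out a simple free-space link budget at 140 GHz. The link distance, receive antenna gain, noise figure, and bandwidth are illustrative assumptions, not values from the article.

```python
import math

def fspl_db(freq_hz, dist_m):
    """Free-space path loss: 20*log10(4*pi*d*f/c)."""
    c = 3e8
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

freq = 140e9          # carrier frequency (from the article)
eirp_dbm = 37.5       # measured TX EIRP (from the article)
dist = 100.0          # assumed link distance in meters
rx_gain_db = 25.0     # assumed receive array gain
nf_db = 8.0           # assumed receiver noise figure
bw_hz = 2e9           # assumed signal bandwidth

noise_dbm = -174 + 10 * math.log10(bw_hz) + nf_db
prx_dbm = eirp_dbm - fspl_db(freq, dist) + rx_gain_db
print(f"path loss: {fspl_db(freq, dist):.1f} dB")
print(f"received power: {prx_dbm:.1f} dBm, SNR: {prx_dbm - noise_dbm:.1f} dB")
```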


J. Leitner, A. Behnke, P. Chiang, M. Ritter, M. Millen and S. Dey, "Classification of Patient Recovery from COVID-19 Symptoms using Consumer Wearables and Machine Learning," in IEEE Journal of Biomedical and Health Informatics.

Current remote monitoring of COVID-19 patients relies on manual symptom reporting, which is highly dependent on patient compliance. In this research, we present a machine learning (ML)-based remote monitoring method to estimate patient recovery from COVID-19 symptoms using automatically collected wearable device data, instead of relying on manually collected symptom data. We deploy our remote monitoring system, named eCOVID, in two COVID-19 telemedicine clinics. Our system uses a Garmin wearable and a symptom-tracker mobile app for data collection. The data consists of vitals, lifestyle, and symptom information, which is fused into an online report for clinicians to review. Symptom data collected via our mobile app is used to label the recovery status of each patient daily. We propose an ML-based binary patient recovery classifier which uses wearable data to estimate whether a patient has recovered from COVID-19 symptoms. We evaluate our method using leave-one-subject-out (LOSO) cross-validation and find that Random Forest (RF) is the top-performing model. Our method achieves an F1-score of 0.88 when applying our RF-based model personalization technique using weighted bootstrap aggregation. Our results demonstrate that ML-assisted remote monitoring using automatically collected wearable data can supplement or replace manual daily symptom tracking, which relies on patient compliance.
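A minimal sketch of the evaluation protocol described above, leave-one-subject-out cross-validation with a Random Forest classifier. The features, labels, and subject groups below are synthetic placeholders, not the eCOVID dataset, and the weighted-bootstrap personalization step is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import f1_score

# Placeholder data: rows are patient-days, `groups` holds the subject ID.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))           # e.g. heart rate, steps, sleep features
y = rng.integers(0, 2, size=200)        # 1 = recovered, 0 = not recovered
groups = rng.integers(0, 20, size=200)  # 20 hypothetical subjects

logo = LeaveOneGroupOut()
scores = []
for train_idx, test_idx in logo.split(X, y, groups):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), zero_division=0))
print(f"mean LOSO F1: {np.mean(scores):.2f}")
```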


Gabriel Rebeiz awarded the 2022 Tatsuo Itoh Prize

CWC congratulates Gabriel Rebeiz and his student Omar El-Aassar for being awarded the 2022 Microwave and Wireless Components Letters Tatsuo Itoh Prize of the IEEE Microwave Theory and Technology Society (MTT-S) for the following paper: O. El-Aassar and G. Rebeiz, "A 120-GHz Bandwidth CMOS Distributed Power Amplifier with Multi-Drive Intra-Stack Coupling," IEEE Microwave and Wireless Components Letters, vol. 30, issue 8, pp.783-785, 2020. This paper presents a novel 120-GHz bandwidth distributed power amplifier (DPA) design in CMOS that improves both the power combining efficiency and the operation bandwidth. 


Rutvik V. Shah, Gillian Grennan, Mariam Zafar-Khan, Fahad Alim, Sujit Dey, Dhakshin Ramanathan, Jyoti Mishra, "Personalized machine learning of depressed mood using wearables." Nature Translational Psychiatry 11, Article number: 338, 2021

Depression is a multifaceted illness with large interindividual variability in clinical response to treatment. In the era of digital medicine and precision therapeutics, new personalized treatment approaches are warranted for depression. Here, we use a combination of longitudinal ecological momentary assessments of depression, neurocognitive sampling synchronized with electroencephalography, and lifestyle data from wearables to generate individualized predictions of depressed mood over a 1-month period. This study thus develops a systematic pipeline for N-of-1 personalized modeling of depression using multiple modalities of data. In the models, we integrate seven types of supervised machine learning (ML) approaches for each individual, including ensemble learning and regression-based methods. All models were verified using fourfold nested cross-validation. The best fit, as benchmarked by the lowest mean absolute percentage error, was obtained by a different type of ML model for each individual, demonstrating that there is no one-size-fits-all strategy. The voting regressor, a composite strategy across ML models, performed best on average across subjects. However, the individually selected best-fit models still showed significantly less error than the voting regressor across subjects. For each individual's best-fit personalized model, we further extracted top feature predictors using Shapley statistics. Shapley values revealed distinct feature determinants of depression over time for each person, ranging from comorbid anxiety to physical exercise, diet, momentary stress, breathing performance, sleep times, and neurocognition. In the future, these personalized features can serve as targets for a personalized, ML-guided, multimodal treatment strategy for depression.
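The per-individual model-selection idea (compare several regressors plus a composite voting regressor, scored by mean absolute percentage error under cross-validation) can be sketched as follows; the features and targets are synthetic stand-ins for the study's multimodal inputs, and only three of the seven model families are shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 8))        # ~1 month of daily multimodal features
y = rng.uniform(0, 10, size=30)     # self-reported depressed-mood score

base = [("ridge", Ridge()), ("svr", SVR()),
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0))]
models = dict(base)
models["voting"] = VotingRegressor(base)   # composite strategy across models

for name, model in models.items():
    # sklearn returns negative MAPE; flip the sign for reporting.
    mape = -cross_val_score(model, X, y,
                            scoring="neg_mean_absolute_percentage_error",
                            cv=4).mean()
    print(f"{name:7s} MAPE: {mape:.2f}")
```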


Personalized Blood Pressure Estimation Using Photoplethysmography: A Transfer Learning Approach

In this paper, we present a personalized deep learning approach to estimate blood pressure (BP) using the photoplethysmogram (PPG) signal. We propose a hybrid neural network architecture consisting of convolutional, recurrent, and fully connected layers that operates directly on the raw PPG time series and provides a BP estimate every 5 seconds. To address the problem of limited personal PPG and BP data for individuals, we propose a transfer learning technique that personalizes specific layers of a network pre-trained with abundant data from other patients. We use the MIMIC III database, which contains PPG and continuous BP data measured invasively via an arterial catheter, to develop and analyze our approach. Our transfer learning technique, named BP-CRNN-Transfer, achieves a mean absolute error (MAE) of 3.52 and 2.20 mmHg for SBP and DBP estimation, respectively, outperforming existing methods. Our approach satisfies both the BHS and AAMI blood pressure measurement standards for SBP and DBP. Moreover, our results demonstrate that as few as 50 data samples per person are required to train accurate personalized models. We carry out Bland-Altman and correlation analyses to compare our method to the invasive arterial catheter, which is the gold-standard BP measurement method.
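The personalization step (pre-train a hybrid CNN-RNN on many patients, then fine-tune only selected layers on a small amount of an individual's data) could look roughly like the PyTorch sketch below. The layer sizes, the choice of which layers are frozen, and the checkpoint name are illustrative assumptions, not the exact BP-CRNN-Transfer architecture.

```python
import torch
import torch.nn as nn

class BPCRNN(nn.Module):
    """Conv -> GRU -> fully connected head mapping a raw PPG window to (SBP, DBP)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU())
        self.rnn = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):                     # x: (batch, 1, samples) raw PPG
        h = self.conv(x).transpose(1, 2)      # (batch, time, channels)
        _, hidden = self.rnn(h)
        return self.head(hidden[-1])          # (batch, 2) -> SBP, DBP

model = BPCRNN()
# model.load_state_dict(torch.load("pretrained_population.pt"))  # hypothetical checkpoint

# Personalization: freeze the convolutional feature extractor, fine-tune RNN + head.
for p in model.conv.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)

ppg = torch.randn(8, 1, 625)       # e.g. 5 s windows at an assumed 125 Hz rate
target = torch.randn(8, 2)         # personal SBP/DBP labels (placeholder)
loss = nn.functional.mse_loss(model(ppg), target)
loss.backward()
optimizer.step()
```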


Motion Prediction and Pre-Rendering at the Edge to Enable Ultra-Low Latency Mobile 6DoF Experiences

As virtual reality (VR) applications become popular, the desire for high-quality, lightweight, and mobile VR can potentially be satisfied by performing the VR rendering and encoding computations at the edge and streaming the rendered video to the VR glasses. However, if rendering can begin only after the edge learns the user's new head and body position, the round-trip delay will violate the ultra-low latency requirements of VR. In this article, we introduce edge intelligence, wherein the edge predicts, pre-renders, and caches the VR video in advance, to be streamed to the user's VR glasses as soon as it is needed. The edge-based predictive pre-rendering approach can address the challenging six Degrees of Freedom (6DoF) VR content. Compared to 360-degree videos and 3DoF (head motion only) VR, 6DoF VR supports both head and body motion, so not only the viewing direction but also the viewing position can change. Hence, our proposed VR edge intelligence comprises accurately predicting both the head and body motions of a user using past head and body motion traces. In this article, we develop a multi-task long short-term memory (LSTM) model for body motion prediction and a multi-layer perceptron (MLP) model for head motion prediction. We implement the deep learning-based motion prediction models and validate their accuracy and effectiveness using a dataset of over 840,000 samples of head and body motion.
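A rough PyTorch sketch of the two predictors described above: a multi-task LSTM that forecasts body position from past body-motion traces, and an MLP for head orientation. The input/output dimensions, window lengths, and prediction horizons are assumptions for illustration only, not the models reported in the article.

```python
import torch
import torch.nn as nn

class BodyMotionLSTM(nn.Module):
    """Multi-task LSTM: a shared recurrent trunk with one head per predicted
    quantity (e.g. x/y/z body position) over a short look-ahead horizon."""
    def __init__(self, in_dim=3, hidden=64, horizon=10):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.heads = nn.ModuleList([nn.Linear(hidden, horizon) for _ in range(in_dim)])

    def forward(self, past):                   # past: (batch, time, 3)
        _, (h, _) = self.lstm(past)
        # one head per coordinate -> (batch, 3, horizon)
        return torch.stack([head(h[-1]) for head in self.heads], dim=1)

class HeadMotionMLP(nn.Module):
    """MLP mapping a flattened window of past head orientations
    (yaw/pitch/roll) to the orientations a few frames ahead."""
    def __init__(self, window=30, horizon=10):
        super().__init__()
        self.horizon = horizon
        self.net = nn.Sequential(
            nn.Linear(window * 3, 128), nn.ReLU(),
            nn.Linear(128, horizon * 3))

    def forward(self, past):                   # past: (batch, window, 3)
        return self.net(past.flatten(1)).view(-1, self.horizon, 3)

body = BodyMotionLSTM()(torch.randn(4, 60, 3))   # -> (4, 3, 10) future positions
head = HeadMotionMLP()(torch.randn(4, 30, 3))    # -> (4, 10, 3) future orientations
```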


State of Energy Prediction in Renewable Energy-driven Mobile Edge Computing using CNN-LSTM Networks

Renewable energy (RE) is a promising solution for saving grid power in mobile edge computing (MEC) systems and thus reducing their carbon footprint. However, to operate an RE-based MEC system effectively, a method for predicting the state of energy (SoE) of the battery is essential, not only to prevent the battery from over-charging or over-discharging, but also to allow the MEC applications to adjust their loads in advance based on energy availability. In this work, we consider RE-powered MEC systems at the Road-Side Unit (RSU) and focus on predicting the battery's SoE using machine learning techniques. We developed a real-world RE-powered RSU testbed consisting of edge computing devices, a small cell base station, and solar and wind power generators. By operating the RE-powered RSU to serve real-world computation task offloading demands, we collected the corresponding data sequences of the battery's SoE and other observable parameters of the MEC system that impact the SoE. Using a variant of the Long Short-Term Memory (LSTM) model with additional convolutional layers, we form a CNN-LSTM model that predicts the SoE accurately with very low prediction error. Our results show that CNN-LSTM outperforms other Recurrent Neural Network (RNN)-based models for predicting intra-hour and hour-ahead SoE.
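A compact PyTorch sketch of a CNN-LSTM regressor of the kind described: 1-D convolutions over a window of recent measurements, followed by an LSTM and a scalar SoE output. The feature count and window length are placeholders rather than the testbed's actual configuration.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        # Convolutions extract local temporal patterns from the measurement window.
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)          # predicted SoE (e.g. hour-ahead)

    def forward(self, x):                        # x: (batch, time, n_features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        _, (hn, _) = self.lstm(h)
        return self.out(hn[-1]).squeeze(-1)

# Window of 48 past samples of SoE, solar/wind power, load, etc. (placeholder data)
pred = CNNLSTM()(torch.randn(16, 48, 6))
print(pred.shape)                                # torch.Size([16])
```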


Vehicular and Edge Computing for Emerging Connected and Autonomous Vehicle Applications

To function optimally and efficiently, connected autonomous vehicles must be equipped with both adequate computational resources and appropriate computing architectures. In this paper, the researchers determine the critical performance metrics required for connected-vehicle applications and empirically demonstrate which factors and metrics satisfy the static and dynamic computing needs for optimal performance. The researchers also examine the tradeoffs among different offloading strategies and how different edge computing architectures for vehicular computing might be made feasible for lightweight, high-performance, and low-power computing paradigms.

 


A 67-μW Ultra-Low Power PVT-Robust MedRadio Transmitter

A 400 MHz narrowband MedRadio transmitter for short-range communication is presented. A new technique for PVT-robust, calibration- and regulation-free synthesis of the RF carrier is reported, based on generating poly-phasors at 50 MHz with no power overhead. This is accomplished using a passive polyphase filter directly integrated within a crystal oscillator, followed by an 8× edge combiner that synthesizes the RF carrier with -109 dBc/Hz phase noise at 100 kHz offset. A dual-supply, inverse class-E power amplifier is implemented for high efficiency at low output power (-17.5 dBm). Open-loop operation permits aggressive duty-cycling (<40 ns start-up time). State-of-the-art ultra-low power consumption is reported for a prototype BPSK transmitter fabricated in 22 nm CMOS FDX that consumes 67 μW with 27% global efficiency when operated from a 0.4/0.2 V supply.
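The headline numbers are self-consistent, as the short calculation below checks: an 8× edge combiner acting on 50 MHz poly-phases yields the 400 MHz carrier, and the global efficiency follows from the -17.5 dBm output power and 67 μW consumption. This is just arithmetic on the reported values, not a circuit model.

```python
carrier_hz = 8 * 50e6                      # 8x edge combining of 50 MHz phases
p_out_w = 10 ** (-17.5 / 10) * 1e-3        # -17.5 dBm output power, in watts
p_dc_w = 67e-6                             # 67 uW total transmitter power
print(f"carrier: {carrier_hz / 1e6:.0f} MHz")                  # 400 MHz
print(f"global efficiency: {100 * p_out_w / p_dc_w:.0f} %")    # ~27 %
```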


A 22.3 nW, 4.55 cm2 Temperature-Robust Wake-up Receiver Achieving a Sensitivity of -69.5 dBm at 9 GHz

This article presents a miniaturized wake-up receiver (WuRX) that is temperature-compensated yet consumes only 22.3 nW (7.3 nW excluding the temperature-compensation blocks) while operating at 9 GHz (X-band). By moving the carrier frequency to 9 GHz and designing a high-impedance, passive envelope detector (ED), the transformer size is reduced to 0.02 cm2 while still achieving 13.5 dB of passive RF voltage gain. To further reduce the area, a global common-mode feedback (CMFB) technique is applied across the ED and baseband (BB) amplifier, eliminating the need for off-chip ac-coupling components. Multiple temperature-compensation techniques are proposed to maintain a constant signal-path bandwidth and a constant clock frequency. The WuRX was implemented in 65-nm (RF) and 180-nm (BB) CMOS and achieves −69.5- and −64-dBm sensitivity with and without an antenna, respectively. Importantly, the sensitivity is demonstrated to vary by only 3 dB from −10 °C to 40 °C. This article demonstrates state-of-the-art performance for WuRXs operating above 1 GHz while achieving the smallest area and best temperature insensitivity.


Using Sensors and Deep Learning to Enable On-Demand Balance Evaluation for Effective Physical Therapy

As part of their treatment, physical therapy patients routinely undergo balance evaluation, which is traditionally performed by a physical therapist in a clinic and can be subjective, inconvenient and time-consuming. This study combines sensors and deep learning to provide an automated balance evaluation system for use in both the clinic and the home. To achieve this, the researchers use a deep learning-based model and a depth camera to estimate the user’s Center of Mass (CoM) position -- a method that outperforms other CoM estimation methods in terms of accuracy and ease-of-use. They then make use of a balance evaluation system to evaluate the subject’s dynamic balance in a Gait Initiation (GI) task. To do this, the subject’s CoM position is estimated by the study’s CoM estimation model, and a Wii balance board is used to measure the Center of Pressure (CoP). The researchers then use CoP-CoM trajectory during the GI task to assess and quantify the patient’s dynamic balance control. The proposed model is able to quantify balance level for both healthy subjects and those with Parkinson’s Disease in a way that is consistent with the human PT’s assessments in traditional balance evaluation tests, making it a portable and low-cost tool for on-demand balance evaluation.

 


Predictive Adaptive Streaming to Enable Mobile 360-degree and VR Experiences

Delivering the kind of truly immersive 360-degree virtual reality experiences that both consumers and enterprise users expect requires ultra-high bandwidth and ultra-low latency networks. Developing these networks for mobile use is an even greater challenge. To optimize mobile VR performance, developers commonly stream only the field of view (FOV) to reduce bandwidth. However, when a user changes head position, the FOV must also change in real time to avoid excessive latency. This paper proposes a predictive adaptive streaming approach, where the predicted view is adaptively encoded at a relatively high quality, given bandwidth conditions, and is transmitted in advance. This leads to a simultaneous reduction in bandwidth and latency. This method is based on a deep-learning-based viewpoint prediction model developed by the researchers, which uses past head motions to predict where a user will be looking in the 360-degree view. The method for high-quality, low latency visualization is validated by way of a very large dataset consisting of head motion traces from more than 36,000 viewers for nineteen 360-degree/VR videos.

 


Predictive Adaptive Streaming to Enable Mobile 360-degree and VR Experiences

As 360-degree videos and virtual reality (VR) applications become popular for consumer and enterprise use cases, the desire to enable truly mobile experiences also increases. Delivering 360-degree videos and cloud/edge-based VR applications requires ultra-high bandwidth and ultra-low latency [1], which are challenging to achieve with mobile networks. A common approach to reduce bandwidth is streaming only the field of view (FOV). However, extracting and transmitting the FOV in response to user head motion can add high latency, adversely affecting user experience. In this paper, we propose a predictive adaptive streaming approach, where the predicted view with high predictive probability is adaptively encoded at relatively high quality according to bandwidth conditions and transmitted in advance, leading to a simultaneous reduction in bandwidth and latency. The predictive adaptive streaming method is based on a deep-learning-based viewpoint prediction model we develop, which uses past head motions to predict where a user will be looking in the 360-degree view. Using a very large dataset consisting of head motion traces from over 36,000 viewers for nineteen 360-degree/VR videos, we validate the ability of our predictive adaptive streaming method to offer a high-quality view while simultaneously significantly reducing bandwidth.
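One way to picture the "predictive adaptive" part is the toy Python routine below: tiles covering the predicted view are encoded at a higher quality tier, and the tiers are scaled to fit an available-bandwidth budget. The tile model, bitrates, and probability thresholds are invented for illustration; they are not the paper's actual encoder settings.

```python
def allocate_tile_bitrates(view_probs, budget_mbps,
                           rates_mbps=(0.5, 2.0, 8.0)):
    """Assign each 360-degree tile a bitrate tier based on its predicted
    probability of being viewed, then scale down to respect the budget."""
    tiers = [rates_mbps[2] if p > 0.5 else rates_mbps[1] if p > 0.1 else rates_mbps[0]
             for p in view_probs]
    total = sum(tiers)
    scale = min(1.0, budget_mbps / total)   # uniform down-scaling if over budget
    return [r * scale for r in tiers]

# 8 tiles; the viewpoint predictor says the user will most likely look at tiles 2-3.
probs = [0.02, 0.05, 0.7, 0.6, 0.1, 0.05, 0.02, 0.01]
print(allocate_tile_bitrates(probs, budget_mbps=12.0))
```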


A 13.9-nA ECG Amplifier Achieving 0.86/0.99 NEF/PEF Using AC-coupled OTA-Stacking

An ultra-low power electrocardiogram (ECG) recording front-end intended for implantable sensors is presented. The noise-limited, high-power first stage of a two-stage amplifier utilizes stacking of operational transconductance amplifiers (OTAs) to improve noise and power efficiency. The proposed technique applies upmodulated/chopped signals to ac-coupled, stacked inverter-based OTAs that inherently sum the individual transconductances while reusing the same current, thereby enhancing the noise efficiency. Two prototype designs were fabricated in a 180-nm CMOS process. The three-stack version consumes 13.2 nW and occupies 0.18 mm2, whereas the five-stack implementation consumes 18.7 nW and occupies 0.24 mm2. State-of-the-art NEF and PEF metrics of less than unity, 0.86 and 0.99, respectively, are reported for the five-stack version. These correspond to a ∼3× improvement in energy efficiency compared to prior ultra-low-power, sub-100-nW amplifiers.
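For readers unfamiliar with the figures of merit, the sketch below evaluates the standard noise-efficiency-factor and power-efficiency-factor definitions (NEF per Steyaert and Sansen; PEF = NEF² · VDD). The input-referred noise, current, bandwidth, and supply values used here are hypothetical placeholders, not the measured values from the paper; they only show how the metrics are computed.

```python
import math

def nef(v_noise_rms, i_total, bandwidth_hz, temp_k=300.0):
    """Noise efficiency factor: Vn,rms * sqrt(2*Itot / (pi * UT * 4kT * BW))."""
    k = 1.380649e-23                       # Boltzmann constant
    u_t = k * temp_k / 1.602176634e-19     # thermal voltage kT/q
    return v_noise_rms * math.sqrt(
        2 * i_total / (math.pi * u_t * 4 * k * temp_k * bandwidth_hz))

def pef(nef_value, vdd):
    """Power efficiency factor: NEF^2 * VDD."""
    return nef_value ** 2 * vdd

# Hypothetical example values (NOT the paper's measurements):
example_nef = nef(v_noise_rms=3e-6, i_total=20e-9, bandwidth_hz=250.0)
print(f"NEF = {example_nef:.2f}, PEF = {pef(example_nef, vdd=0.6):.2f}")
```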


Head and Body Motion Prediction to Enable Mobile VR Experiences with Low Latency

As virtual reality (VR) and augmented reality (AR) applications grow in popularity across economic sectors, so does the need for high-quality, lightweight and mobile VR. Yet to enable a truly mobile VR experience, mobile head-mounted displays (HMDs) must make use of bandwidth-constrained mobile networks while also satisfying ultra-low latency requirements. To accomplish this, the researchers propose that the heavy computational tasks of video rendering take place in advance in edge/cloud-based Six Degrees of Freedom (6DoF) servers. Both head and body motions and viewing position changes are supported by 6DoF, which allows the system (by way of a deep learning-based model) to accurately and efficiently predict viewing direction and position using past head and body motion traces. This predictive model could feasibly lead to a reduction in latency as the video is streamed to users via lightweight VR glasses. 
 


Offline and Online Learning Techniques for Personalized Blood Pressure Prediction and Health Behavior Recommendations

Although blood pressure (BP) is one of the chief indicators of human health and is highly correlated with various health behaviors such as sleep and exercise, little is known about how each of these health behaviors may affect any given individual's BP. For this study, the researchers hypothesized that they could predict an individual's BP using health behavior and historical BP, and then identify the most important factors in BP prediction for that individual. This information could then be used to provide personal health behavior recommendations to improve and manage BP. Using data collected from off-the-shelf wearable devices and wireless home BP monitors, the researchers investigated the relationship between BP and health behaviors using a personalized BP model based on a Random Forest with Feature Selection (RFFS). To enhance RFFS and account for problematic concept drifts and anomaly points, the RFFS model was paired with an Online Weighted Resampling (OWR) technique. The experimental results demonstrate that the RFFS/OWR prediction model achieves a lower prediction error than existing machine-learning methods.
 


Joint Vessel Segmentation and Deformable Registration on Multi-Modal Retinal Images Based on Style Transfer

Diagnosing ophthalmologic diseases in patients by way of image analysis typically requires capturing, collecting and aligning retinal images from multiple modalities, each of which conveys complementary information. This task is complicated, however, by two major algorithmic challenges: 1) inconsistent features from each modality make it difficult for algorithms to find mutual information between two modalities and 2) most data lacks labels necessary for training learning-based models. This study proposes a combined vessel segmentation and deformable registration model based on a Convolutional Neural Network (CNN) to achieve this task. The vessel segmentation network is trained without ground truth using a learning scheme based on style transfer to extract mutual patterns and transform images of different modalities into consistent representations. This network is paired with a deformable registration model, which uses content loss to help find dense correspondences between multi-modal images based on their consistent representations, while also improving segmentation results. This model is demonstrably better than other comparable models at both deformable and rigid transformation tasks. 
 


High ISO JPEG Image Denoising by Deep Fusion of Collaborative and Convolutional Filtering

Photographers typically use a high ISO mode on their cameras to capture fast-moving objects, to record details in low-light environments, and to avoid blurry images when taking photos without tripods. However, a high ISO also introduces significant realistic noise into the image, which is difficult to remove using traditional denoising methods. This study proposes a novel denoising method for high ISO JPEG images that fuses collaborative and convolutional filtering. Collaborative filtering, which is effective for recovering repeatable structures, is performed by first estimating the noise variance according to the Bayer pattern of noise variance maps. Convolutional filtering, which is effective for recovering irregular patterns, is achieved by using a Convolutional Neural Network (CNN) to remove the noise. The two results are then combined via a proposed deep fusion CNN to generate the final denoised image. Experimental results show that this method outperforms state-of-the-art realistic noise removal methods. The researchers also developed a dataset of noisy and clean image pairs for high ISO JPEG images to promote further research on this topic.
 


Sustainable Vehicular Edge Computing Using Local and Solar-Powered Roadside Unit Resources

Evolving in tandem with the computing and communication needs of vehicles is the need to support these vehicles in a way that is sustainable and also minimizes Quality of Service (QoS) loss. This study explores the use of Solar-powered Roadside Units (SRSUs), which consist of a small cell base station (SBS) and a road-edge computing (REC) node to which vehicles can offload various application tasks at high throughput and low latency. To mitigate the QoS loss inherent in using a low-cost solar system, limited REC resources, or limited bandwidth, the researchers propose a dynamic offloading approach whereby the different subtasks of a vehicle application are optimally processed either locally, using the vehicle's own computing resources, or remotely, using the REC resource, taking into consideration the energy, computing, and bandwidth constraints of the SRSU network. This is achieved using a heuristic algorithm that jointly makes the optimal user association, offloads subtasks, and allocates SRSU network resources. The results of a simulation framework used to test the algorithm demonstrate that the proposed approach can significantly reduce QoS loss compared with best-effort strategies.


Towards an On-Demand Virtual Physical Therapist: Machine Learning-Based Patient Action Understanding, Assessment and Task Recommendation

This study proposes a machine learning-based on-demand virtual physical therapy (PT) system to help patients with Parkinson's Disease improve balance and mobility. As patients perform three PT tasks with varying levels of difficulty, their movements are captured by a Kinect sensor and automatically evaluated using criteria carefully designed by a PT co-author. The patient's motion data is then processed by a proposed two-phase human action understanding algorithm, TPHAU, to understand the patient's movements, and by an error identification model to identify the patient's movement errors. To emulate a real physical therapist's guidance, automated, personalized, and timely task recommendations are then made using a machine learning-based model trained on real patient and PT data. Experiments show that the proposed methods are highly accurate in understanding patient actions, identifying errors, and recommending tasks.


Personalized Blood Pressure Estimation using Photoplethysmography and Wavelet Decomposition

Blood pressure (BP) is one of the most important indicators of potential cardiovascular disease, yet traditional cuff-based methods for measuring BP can lead to inaccurate measurements. These methods are, moreover, not practical for the continuous monitoring required for a personalized approach to detecting abnormal BP fluctuations. This study proposes a novel machine-learning model for estimating BP using a subject's previous BP measurements combined with wavelet decomposition to extract features from the photoplethysmogram (PPG) signal, a simple and popular tool for non-invasive diagnosis. To process the arterial BP time series, the researchers use an exponentially weighted moving average (EWMA) and a peak detection technique to derive systolic blood pressure (SBP), diastolic blood pressure (DBP), and their corresponding trends. They then construct a predictive model based on these features using a random forest, and use the MIMIC dataset to analyze and compare the results with other BP estimation methods. The proposed approach demonstrates a smaller estimation error than the cuff-based standard and all other methods studied, with mean errors for SBP and DBP of 3.43 and 1.73 mmHg, respectively.
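A minimal sketch of the wavelet-feature plus random-forest idea, using PyWavelets and scikit-learn. The wavelet family, decomposition level, summary statistics, and placeholder data are illustrative choices rather than the paper's exact pipeline.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestRegressor

def ppg_wavelet_features(ppg_window, wavelet="db4", level=4):
    """Summary statistics of each wavelet sub-band of a PPG window."""
    coeffs = pywt.wavedec(ppg_window, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [c.mean(), c.std(), np.abs(c).max()]
    return np.array(feats)

# Placeholder training data: (wavelet features + previous BP) -> current SBP.
rng = np.random.default_rng(0)
windows = rng.normal(size=(100, 500))            # 100 PPG windows (placeholder)
prev_sbp = rng.uniform(100, 140, size=(100, 1))  # subject's previous SBP values
X = np.hstack([np.vstack([ppg_wavelet_features(w) for w in windows]), prev_sbp])
y = rng.uniform(100, 140, size=100)              # current SBP labels (placeholder)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict(X[:3]))
```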
 


Center of Mass Estimation for Balance Evaluation Using Convolutional Neural Networks

The Center of Mass (CoM) calculation is an important tool for evaluating a person's balance and predicting fall risk (particularly as it applies to the effectiveness of physical rehabilitation programs). Traditional techniques for measuring the CoM position rely on laboratory-grade devices such as a force plate, which are expensive and inconvenient for home use. Inspired by the rapid development of vision-based techniques, this paper proposes a deep learning-based framework that uses a single depth camera to measure the CoM position. The framework estimates the horizontal CoM position of a subject using body parameters obtained from depth image data collected from multiple subjects in various postures. The proposed approach has proven to be highly accurate in estimating the CoM of both previously seen and new subjects and does not require the complicated calibration or subject identification characteristic of existing CoM techniques. It thereby provides a portable and low-cost alternative for enabling automated balance evaluation at home.
 


DC-60 GHz I/Q Modulator in 45 nm SOI CMOS for Ultra-Wideband 5G Radios

The continuing proliferation of wireless electronic devices, coupled with the promise of fifth-generation mobile networks (5G) and Internet-of-Things (IoT)-scale connectivity, will demand innovative design techniques and solutions across all network and device layers for both wireless and optical systems. Broadband and software-defined connectivity is at the forefront of research efforts to address these new challenges. The work presented in this paper explores the limits of current CMOS technology with the goal of achieving a true DC-100 GHz software-defined transmitter front-end with the maximum achievable instantaneous bandwidth. This paper presents a DC-60 GHz I/Q modulator/transmitter chip in 45 nm SOI CMOS that can serve as a critical building block for next-generation multi-standard and high-capacity wireless backhaul links. This new transmitter will address new applications, such as short-range device-to-device communications, server-to-server connectivity in data centers, and fifth-generation mm-wave software-defined transceivers, while still supporting traditional mobile links and connectivity below 6 GHz.


Personalized Effect of Health Behavior on Blood Pressure: Machine Learning Based Prediction and Recommendation

Blood pressure (BP) is one of the most important indicators of human health. In this paper, we investigate the relationship between BP and health behavior (e.g., sleep and exercise). Using data collected from off-the-shelf wearable devices and wireless home BP monitors, we propose a data-driven personalized model to predict daily BP level and provide actionable insight into health behavior and daily BP. In the proposed machine learning model using Random Forest (RF), trend and periodicity features of the BP time series are extracted to improve prediction. To further enhance the performance of the prediction model, we propose RF with Feature Selection (RFFS), which performs RF-based feature selection to filter out unnecessary features. Our experimental results demonstrate that the proposed approach is robust across individuals and has smaller prediction error than existing methods. We also validate the effectiveness of the personalized health behavior recommendations generated by the RFFS model.
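The RF-with-feature-selection step can be sketched with scikit-learn as below: a first Random Forest ranks feature importances, low-importance features are filtered out, and a second forest is trained on the reduced set. The feature set, synthetic data, and threshold are placeholders, not the study's actual variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(0)
# Placeholder daily features: sleep hours, steps, resting HR, previous-day BP, etc.
X = rng.normal(size=(120, 10))
y = 120 + 0.8 * X[:, 3] + rng.normal(scale=2.0, size=120)   # synthetic daily SBP

# Step 1: RF-based feature selection (keep features above median importance).
selector = SelectFromModel(
    RandomForestRegressor(n_estimators=200, random_state=0),
    threshold="median").fit(X, y)
X_sel = selector.transform(X)
print("kept features:", np.flatnonzero(selector.get_support()))

# Step 2: train the final Random Forest on the selected features only.
rffs = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_sel, y)
print("train MAE:", np.mean(np.abs(rffs.predict(X_sel) - y)).round(2), "mmHg")
```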


Predictive View Generation to Enable Mobile 360-degree and VR Experiences

As 360-degree videos and virtual reality (VR) applications become popular for consumer and enterprise use cases, the desire to enable truly mobile experiences also increases. Delivering 360-degree videos and cloud/edge-based VR applications requires ultra-high bandwidth and ultra-low latency [22], which are challenging to achieve with mobile networks. A common approach to reduce bandwidth is streaming only the field of view (FOV). However, extracting and transmitting the FOV in response to user head motion can add high latency, adversely affecting user experience. In this paper, we propose a predictive view generation approach, where only the predicted view is extracted (for 360-degree video) or rendered (for VR) and transmitted in advance, leading to a simultaneous reduction in bandwidth and latency. The view generation method is based on a deep-learning-based viewpoint prediction model we develop, which uses past head motions to predict where a user will be looking in the 360-degree view. Using a very large dataset consisting of head motion traces from over 36,000 viewers for nineteen 360-degree/VR videos, we validate the ability of our viewpoint prediction model and predictive view generation method to offer very high accuracy while simultaneously significantly reducing bandwidth.


Quality of Service Optimization for Vehicular Edge Computing with Solar-Powered Road Side Units

We explore the viability of Solar-powered Road Side Units (SRSUs), consisting of small cell base stations and Mobile Edge Computing (MEC) servers powered solely by solar panels with batteries, to provide connected vehicles with a low-latency, easy-to-deploy, and energy-efficient communication and edge computing infrastructure. However, SRSUs may entail a high risk of power deficiency, leading to severe Quality of Service (QoS) loss due to the spatial and temporal fluctuation of solar power generation. Meanwhile, the data traffic demand also varies with space and time. The mismatch between solar power generation and SRSU power consumption makes optimal use of solar power challenging. In this paper, we model the above problem as three sub-problems: SRSU power consumption minimization, temporal energy balancing, and spatial energy balancing. Three algorithms are proposed to solve these sub-problems, and together they provide a complete joint battery charging and user association control algorithm that minimizes the QoS loss under the delay constraints of the computing tasks. Results for a simulated urban environment using actual solar irradiance and vehicular traffic data demonstrate that the proposed solution reduces the QoS loss significantly compared to greedy approaches.


Human Action Understanding and Movement Error Identification for the Treatment of Patients with Parkinson’s Disease

Traditional physical therapy treatment for patients with Parkinson's disease (PD) requires regular visits with the physical therapist (PT), which may be expensive and inconvenient. In this paper, we propose a learning-based personalized treatment system to enable home-based training for PD patients. It uses the Kinect sensor to monitor the patient's movements at home. Three physical therapy tasks with multiple difficulty levels were selected by our PT co-author to help PD patients improve balance and mobility. Criteria for each task are carefully designed such that the patient's performance can be automatically evaluated by the proposed system. Given the patient's motion data, we propose a two-phase human action understanding algorithm, TPHAU, to understand the patient's movements. To evaluate patient performance, we use a Support Vector Machine (SVM) to identify the patient's errors in performing the task. The patient's errors can therefore be reported to the PT, who can remotely supervise the patient's performance and conformance on the training tasks. Moreover, the PT can update the tasks that the patient should perform through the cloud-based platform in a timely manner, which enables personalized treatment for the patient. To validate the proposed approach, we have collected data from PD patients in the clinic. Experiments on real patient data show that the proposed methods can accurately understand the patient's actions and identify the patient's movement errors in performing the task.
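The error-identification stage described above amounts to supervised classification of movement features; a hedged scikit-learn sketch is shown below, with the joint-derived features and error labels invented purely for illustration rather than taken from the patient dataset.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder features per task repetition: e.g. trunk lean, step length,
# sway range, completion time extracted from Kinect skeleton data.
X = rng.normal(size=(150, 8))
y = rng.integers(0, 3, size=150)   # 0 = correct, 1/2 = two hypothetical error types

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated error-identification accuracy: {acc:.2f}")
```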


Novel Hybrid-Cast Approach to Reduce Bandwidth and Latency for Cloud-Based Virtual Space

We explore the possibility of enabling cloud-based virtual space applications for better computational scalability and easy access from any end device, including future lightweight wireless head-mounted displays. In particular, we investigate virtual space applications such as a virtual classroom and a virtual gallery, in which the scenes and activities are rendered in the cloud, with multiple views captured and streamed to each end device. A key challenge is the high bandwidth required to stream all the user views, leading to high operational cost and potentially large delay in a bandwidth-restricted wireless network. We propose a novel hybrid-cast approach to save bandwidth in a multi-user streaming scenario. We identify and broadcast the common pixels shared by multiple users, while unicasting the residual pixels for each user. We formulate the problem of minimizing the total bitrate needed to transmit the user views using hybrid-casting and describe our approach, in which a common view extraction method and a smart grouping algorithm are proposed and developed. Simulation results show that the hybrid-cast approach can reduce the total bitrate by up to 55% and avoid congestion-related latency compared to the traditional cloud-based approach of transmitting all views as individual unicast streams, hence addressing the bandwidth challenges of the cloud with additional benefits in cost and delay.
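The core bandwidth-saving idea, broadcast the pixels common to several user views and unicast only each user's residual, can be illustrated with the toy numpy sketch below. The view shapes and the exact-equality test for "common" pixels are simplifications for illustration, not the paper's actual common-view extraction or grouping algorithm.

```python
import numpy as np

def hybridcast_cost(views):
    """Rough pixel-count comparison of unicast-everything vs. hybrid-cast.
    `views` is a list of equally sized 2-D arrays, one rendered view per user."""
    stack = np.stack(views)                          # (n_users, H, W)
    common_mask = np.all(stack == stack[0], axis=0)  # pixels identical for all users
    broadcast = common_mask.sum()                    # sent once for everyone
    residual = (~common_mask).sum() * len(views)     # sent per user
    unicast = stack[0].size * len(views)             # baseline: full view per user
    return unicast, broadcast + residual

# Toy views: a shared background with a small user-specific region each.
base = np.zeros((120, 160), dtype=np.uint8)
views = []
for u in range(4):
    v = base.copy()
    v[10 * u:10 * u + 10, :20] = 255                 # user-specific content
    views.append(v)

uni, hyb = hybridcast_cost(views)
print(f"unicast pixels: {uni}, hybrid-cast pixels: {hyb} "
      f"({100 * (1 - hyb / uni):.0f}% saving)")
```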