Selected Publications

Controlling Vision-Language Models for Multi-Task Image Restoration
We present a degradation-aware vision-language model (DA-CLIP) as a multi-task framework for image restoration. DA-CLIP trains an additional controller that adapts the fixed CLIP image encoder to predict high-quality feature embeddings. By integrating this embedding into an image restoration network via cross-attention, we guide the model towards learning high-fidelity image reconstruction. The controller also outputs a degradation feature that matches the real corruptions of the input, yielding a natural classifier for different degradation types. Our approach advances the state of the art on both degradation-specific and unified image restoration tasks.
ICLR, 2024
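
For intuition, a cross-attention conditioning block of this kind could look roughly as follows. This is only a minimal PyTorch sketch assuming a single conditioning token and standard building blocks; module names, shapes and hyperparameters are my own illustrative choices, not the DA-CLIP implementation.

    # Minimal sketch (not the DA-CLIP code): conditioning a restoration network's
    # feature map on an image embedding via cross-attention.
    import torch
    import torch.nn as nn

    class CrossAttentionBlock(nn.Module):
        def __init__(self, feat_dim=64, embed_dim=512, num_heads=4):
            super().__init__()
            self.to_kv = nn.Linear(embed_dim, feat_dim)  # project embedding to feature dim
            self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(feat_dim)

        def forward(self, feats, cond_embed):
            # feats: (B, C, H, W) features inside the restoration network
            # cond_embed: (B, embed_dim) embedding from the adapted image encoder
            B, C, H, W = feats.shape
            q = feats.flatten(2).transpose(1, 2)      # queries: (B, H*W, C)
            kv = self.to_kv(cond_embed).unsqueeze(1)  # key/value token: (B, 1, C)
            out, _ = self.attn(self.norm(q), kv, kv)  # spatial features attend to the embedding
            return (q + out).transpose(1, 2).reshape(B, C, H, W)

    feats, cond = torch.randn(2, 64, 32, 32), torch.randn(2, 512)
    print(CrossAttentionBlock()(feats, cond).shape)   # torch.Size([2, 64, 32, 32])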

How Reliable is Your Regression Model’s Uncertainty Under Real-World Distribution Shifts?
We propose a benchmark for testing the reliability of regression uncertainty estimation methods under real-world distribution shifts. It consists of 8 image-based regression datasets with different types of challenging distribution shifts. We use our benchmark to evaluate many of the most common uncertainty estimation methods, as well as two state-of-the-art uncertainty scores from OOD detection. While the methods are well calibrated when there is no distribution shift, they all become highly overconfident on many of the benchmark datasets. This uncovers important limitations of current uncertainty estimation methods, and our benchmark thus serves as a challenge to the research community.
TMLR, 2023
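
One common way to quantify the calibration and overconfidence described above is via prediction-interval coverage: the fraction of test targets that fall inside the model's, say, 90% prediction intervals should stay close to 0.90 also under distribution shift. Below is a minimal sketch under my own assumptions (not the benchmark code) for a Gaussian regression model.

    # Sketch (my own illustration): empirical coverage of central 90% prediction
    # intervals from a Gaussian regression model. Well-calibrated predictions give
    # coverage close to 0.90; overconfidence shows up as clearly lower coverage.
    import numpy as np

    def interval_coverage(mu, sigma, y, z=1.645):         # z = 1.645 <-> central 90% interval
        lower, upper = mu - z * sigma, mu + z * sigma
        return np.mean((y >= lower) & (y <= upper))

    rng = np.random.default_rng(0)
    mu = rng.normal(size=1000)
    y = mu + rng.normal(size=1000)                        # true residuals ~ N(0, 1)
    print(interval_coverage(mu, np.ones(1000), y))        # ~0.90: well-specified sigma
    print(interval_coverage(mu, 0.5 * np.ones(1000), y))  # overconfident sigma -> lower coverage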

Image Restoration with Mean-Reverting Stochastic Differential Equations
We present a stochastic differential equation (SDE) approach for general-purpose image restoration. The key construction is a mean-reverting SDE that models the degradation process from a high-quality image to its low-quality counterpart. By simulating the corresponding reverse-time SDE, high-quality images can then be restored. We also propose a maximum likelihood objective that stabilizes training and improves the restoration results. Our method achieves highly competitive performance on the tasks of image deraining, deblurring and denoising. Its general applicability is further demonstrated via qualitative results on image super-resolution, inpainting and dehazing.
ICML, 2023
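
For reference, a mean-reverting (Ornstein-Uhlenbeck-type) SDE and its reverse-time counterpart take the generic form below; this is my own notation, and the paper's exact parameterization may differ.

    \mathrm{d}x = \theta_t\,(\mu - x)\,\mathrm{d}t + \sigma_t\,\mathrm{d}w
    \qquad \text{(forward, pulls } x \text{ towards } \mu\text{)}

    \mathrm{d}x = \big[\theta_t\,(\mu - x) - \sigma_t^2\,\nabla_x \log p_t(x)\big]\,\mathrm{d}t + \sigma_t\,\mathrm{d}\bar{w}
    \qquad \text{(reverse-time)}

Here \mu plays the role of the low-quality image, and the score \nabla_x \log p_t(x) is approximated by a learned network, so that simulating the reverse-time SDE transports a degraded input back towards a high-quality restoration.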

ECG-Based Electrolyte Prediction: Evaluating Regression and Probabilistic Methods
Imbalances in the body's electrolyte concentrations can lead to catastrophic consequences, but accurate and accessible measurements could improve patient outcomes. While blood tests provide accurate measurements, they are invasive and the laboratory analysis can be slow or inaccessible. In contrast, an ECG is a widely adopted tool that is quick and simple to acquire. However, the problem of estimating continuous electrolyte concentrations directly from ECGs is not well-studied. We therefore investigate whether regression methods can be used for ECG-based prediction of electrolyte concentrations.
Preprint, 2022

Learning Proposals for Practical Energy-Based Regression
We derive an efficient and convenient objective that can be employed to train a parameterized distribution q(y|x; phi) by directly minimizing its KL divergence to a conditional EBM p(y|x; theta). We then employ this objective to jointly learn an effective MDN proposal distribution during EBM training, thus addressing the main practical limitations of energy-based regression. Furthermore, we utilize the derived objective to learn MDNs with a jointly trained energy-based teacher, consistently outperforming conventional MDN training on four real-world regression tasks within computer vision.
AISTATS, 2022
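
For context, the generic form of such an objective (in my own notation; the paper's exact formulation and choice of KL direction may differ) follows from the fact that minimizing the forward KL divergence with respect to phi is equivalent to maximizing the expected log-likelihood of EBM samples under q:

    \arg\min_{\phi} \; D_{\mathrm{KL}}\!\big(p(y|x;\theta)\,\|\,q(y|x;\phi)\big)
    \;=\; \arg\max_{\phi} \; \mathbb{E}_{y \sim p(y|x;\theta)}\big[\log q(y|x;\phi)\big]

since the term \mathbb{E}_{p}[\log p] does not depend on \phi. The expectation over the intractable EBM can then be approximated, e.g. with self-normalized importance sampling using q itself as the proposal.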

Accurate 3D Object Detection using Energy-Based Models
We apply energy-based models p(y|x; theta) to the task of 3D bounding box regression, extending the recent energy-based regression approach from 2D to 3D object detection. This is achieved by designing a differentiable pooling operator for 3D bounding boxes y and adding an extra network branch to the state-of-the-art 3D object detector SA-SSD. We evaluate the proposed detector on the KITTI dataset and consistently outperform the SA-SSD baseline, demonstrating the potential of energy-based models for 3D object detection.
CVPR Workshops, 2021

How to Train Your Energy-Based Model for Regression
We propose a simple yet highly effective extension of noise contrastive estimation (NCE) for training energy-based models p(y|x; theta) on regression tasks. Our proposed method, NCE+, can be understood as a direct generalization of NCE that accounts for noise in the annotation process of real-world datasets. We provide a detailed comparison of NCE+ and six popular methods from the literature, the results of which suggest that NCE+ should be considered the go-to training method. We also apply NCE+ to the task of visual tracking, achieving state-of-the-art performance on five commonly used datasets. Notably, our tracker achieves 63.7% AUC on LaSOT and 78.7% Success on TrackingNet.
BMVC, 2020
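
As a rough sketch of the underlying idea (my notation; see the paper for the exact NCE+ objective), NCE for a conditional EBM f_\theta(x, y) amounts to a softmax cross-entropy that asks the model to identify the true target among M noise samples drawn from a known distribution q(y | y_i):

    J(\theta) = -\frac{1}{N}\sum_{i=1}^{N} \log
    \frac{\exp\!\big(f_\theta(x_i, y_i^{(0)}) - \log q(y_i^{(0)} \,|\, y_i)\big)}
         {\sum_{m=0}^{M} \exp\!\big(f_\theta(x_i, y_i^{(m)}) - \log q(y_i^{(m)} \,|\, y_i)\big)},
    \qquad y_i^{(0)} = y_i,\quad y_i^{(1)}, \dots, y_i^{(M)} \sim q(\cdot \,|\, y_i).

NCE+ additionally perturbs the "true" sample y_i^{(0)} with a small amount of noise, modelling the fact that real-world annotations y_i are themselves noisy.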

Energy-Based Models for Deep Probabilistic Regression
We propose a general and conceptually simple regression method with a clear probabilistic interpretation. We create an energy-based model of the conditional target density p(y|x), using a deep neural network to predict the unnormalized density from the input-target pair (x, y). This model of p(y|x) is trained by directly minimizing the associated negative log-likelihood, approximated using Monte Carlo sampling. Notably, our model achieves a 2.2% AP improvement over Faster-RCNN for object detection on the COCO dataset, and sets a new state of the art in visual tracking when applied to bounding box regression.
ECCV, 2020
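
Concretely, the model and training objective take the form below (my notation; the specific proposal distribution used in the paper is omitted here), with the intractable normalizing constant approximated by Monte Carlo importance sampling:

    p(y|x;\theta) = \frac{\exp f_\theta(x, y)}{Z(x;\theta)},
    \qquad Z(x;\theta) = \int \exp f_\theta(x, \tilde{y})\,\mathrm{d}\tilde{y},

    -\log p(y_i|x_i;\theta) = \log Z(x_i;\theta) - f_\theta(x_i, y_i),
    \qquad Z(x_i;\theta) \approx \frac{1}{M}\sum_{m=1}^{M} \frac{\exp f_\theta(x_i, y^{(m)})}{q(y^{(m)})},
    \quad y^{(m)} \sim q.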

Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision
We propose a comprehensive evaluation framework for scalable epistemic uncertainty estimation methods in deep learning. It is specifically designed to test the robustness required in real-world computer vision applications. We also apply our proposed framework to provide the first properly extensive and conclusive comparison of the two current state-of-the-art scalable methods: ensembling and MC-dropout. Our comparison demonstrates that ensembling consistently provides more reliable and practically useful uncertainty estimates.
CVPR Workshops, 2020
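
To make the ensembling approach concrete, here is a minimal sketch (under my own assumptions about the model interface, not the paper's code) of how M independently trained Gaussian regression networks can be combined into a single predictive mean and variance:

    # Sketch (illustrative assumptions): predictive mean and variance from an
    # ensemble of Gaussian regression models, treated as a uniform mixture of
    # the M predicted Gaussians N(mu_i(x), sigma_i^2(x)).
    import torch

    def ensemble_predict(models, x):
        mus, sigma2s = [], []
        for model in models:
            mu, log_sigma2 = model(x)        # assumed two-headed model: mean, log-variance
            mus.append(mu)
            sigma2s.append(log_sigma2.exp())
        mus, sigma2s = torch.stack(mus), torch.stack(sigma2s)   # (M, B)
        mean = mus.mean(dim=0)
        # Total variance = mean aleatoric variance + variance of the means (epistemic).
        var = sigma2s.mean(dim=0) + mus.var(dim=0, unbiased=False)
        return mean, var

    # Dummy usage with three toy "models":
    models = [lambda x, b=b: (x.sum(dim=1) + b, torch.zeros(x.shape[0])) for b in range(3)]
    print(ensemble_predict(models, torch.randn(4, 2)))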

Publications

Controlling Vision-Language Models for Multi-Task Image Restoration
ICLR, 2024

arXiv | Code | Project | OpenReview

Refusion: Enabling Large-Size Realistic Image Restoration with Latent-Space Diffusion Models
CVPR Workshops, 2023

arXiv | Code

Image Restoration with Mean-Reverting Stochastic Differential Equations
ICML, 2023

arXiv | Code | Project

ECG-Based Electrolyte Prediction: Evaluating Regression and Probabilistic Methods
Preprint, 2022

arXiv | Code

Learning Proposals for Practical Energy-Based Regression
AISTATS, 2022

arXiv | Code | Poster | Slides

Uncertainty-Aware Body Composition Analysis with Deep Regression Ensembles on UK Biobank MRI
Computerized Medical Imaging and Graphics, 2021

arXiv | Code

Deep Energy-Based NARX Models
SYSID, 2021

arXiv | Code | Slides

Accurate 3D Object Detection using Energy-Based Models
CVPR Workshops, 2021

arXiv | Code | Poster | Video | Slides

How to Train Your Energy-Based Model for Regression
BMVC, 2020

arXiv | Code | Video (1.5 min) | Slides (1.5 min)

Energy-Based Models for Deep Probabilistic Regression
ECCV, 2020

arXiv | Code | Video (1 min) | Slides (1 min) | Video | Slides

Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision
CVPR Workshops, 2020

arXiv | Code | Poster | Video | Slides

Automotive 3D Object Detection Without Target Domain Annotations
Master of Science Thesis in Electrical Engineering, 2018

Link to paper | Code | Video | Slides

Teaching

Uppsala University

Linköping University

Academic Service

Reviewing

80 papers in total.

Talks

Invited Talks

  • How Reliable is Your Regression Model’s Uncertainty Under Real-World Distribution Shifts?
    RISE Learning Machines Seminars | Online | [slides]
    March 21, 2024

  • How Reliable is Your Regression Model’s Uncertainty Under Real-World Distribution Shifts?
    DFKI Augmented Vision Workshop | Online | [slides]
    October 31, 2023

  • Accurate 3D Object Detection using Energy-Based Models
    Zenseact | Online | [slides]
    January 29, 2021

  • Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision
    Zenuity | Gothenburg, Sweden | [slides]
    June 18, 2019

Other Presentations

  • Towards Accurate and Reliable Deep Regression Models
    PhD defense | Uppsala, Sweden | [slides] [video]
    November 30, 2023

  • Some Advice for New (and Old?) PhD Students
    SysCon μ seminar at our weekly division meeting | Uppsala, Sweden | [slides]
    March 16, 2023

  • Can You Trust Your Regression Model’s Uncertainty Under Distribution Shifts?
    SysCon μ seminar at our weekly division meeting | Uppsala, Sweden | [slides]
    September 15, 2022

  • Energy-Based Probabilistic Regression in Computer Vision
    Half-time seminar | Online | [slides]
    February 3, 2022

  • Regression using Energy-Based Models and Noise Contrastive Estimation
    SysCon μ seminar at our weekly division meeting | Online | [slides]
    February 12, 2021

  • Semi-Flipped Classroom with Scalable-Learning and CATs
    Pedagogical course project presentation | Uppsala, Sweden | [slides]
    December 18, 2019

  • Deep Conditional Target Densities for Accurate Regression
    SysCon μ seminar at our weekly division meeting | Uppsala, Sweden | [slides]
    November 1, 2019

  • Predictive Uncertainty Estimation with Neural Networks
    SysCon μ seminar at our weekly division meeting | Uppsala, Sweden | [slides]
    March 22, 2019

Reading

Papers

I categorize, annotate and write comments for all research papers I read, and share this publicly on GitHub (380+ papers since September 2018). Feel free to reach out with any questions or suggested reading. In June 2023, I also wrote the blog post The How and Why of Reading 300 Papers in 5 Years about this.

From 2018 to 2023, I organized the SysCon machine learning reading group.

Books

I have also started to really enjoy reading non-technical books, e.g. about ethics and political philosophy. Since late 2022, I have read the following books:

30 books in total.

Blog Posts


In 2023, I read 87 papers and 26 non-technical books. 87 papers is slightly more than my previous record (82 papers in 2022), and I’ve never even been remotely close to reading 26 books in a year. Deciding to read more books is definitely…


Since I started my PhD almost five years ago, I have categorized, annotated and written short comments for all research papers I read in detail. I share this publicly in a GitHub repository, and recently reached 300 read papers. To mark this milestone, I decided to share some thoughts on why I think it’s important to read a lot of papers, and how I organize my reading. I also compiled some paper statistics, along with a list of 30 papers that I found particularly interesting…


We have created a video in which we try to explain how machine learning works and how it can be used to help doctors. The explanation is tailored to students in grades 7-9, and the idea is that you should only need to know about basic linear functions (straight lines) to understand everything.


When I first got interested in deep learning a couple of years ago, I started out using TensorFlow. In early 2018 I then decided to switch to PyTorch, a decision that I’ve been very happy with ever since…


Running

During my PhD years, running became an important part of my life, crucial for keeping me productive and in a good mental state through the work days and weeks. I'm a relatively serious runner, but I run mostly because it's a lot of fun, a great way to explore my surroundings, and good for both my physical and mental health. My training can be followed on Strava.

From Sep 10 2020 until Dec 31 2023, I was on a 1208-day run streak (running at least 2 km outside every day). I started a new, modified run/walk streak on Jan 2 2024 (just walking 1 km outside is fine on proper sick days).

Personal Bests

  • 10 km: 34:44 (3:28 min/km) | Bålsta, 23-04-29 | [Strava]
  • Half Marathon: 1:19:06 (3:45 min/km) | Uppsala, 23-10-28 | [Strava]

Stats by Year

  • 2023
    • Distance: 4,277.2 km (daily avg: 11.7 km | weekly avg: 82.0 km)
    • Elevation gain: 37,028 m
  • 2022
    • Distance: 3,871.5 km (daily avg: 10.6 km | weekly avg: 74.2 km)
    • Elevation gain: 19,657 m
  • 2021
    • Distance: 3,244.7 km (daily avg: 8.9 km | weekly avg: 62.2 km)
    • Elevation gain: 16,349 m
  • 2020
    • Distance: 3,593.1 km (daily avg: 9.8 km | weekly avg: 68.9 km)
    • Elevation gain: 24,089 m
  • 2019
    • Distance: 1,604.4 km
    • Elevation gain: 12,634 m

Coursework

Uppsala University

81.5 credits in total.

Stanford University

  • CS 229 | Machine Learning | 3 Units
  • EE 263 | Introduction to Linear Dynamical Systems | 3 Units
  • EE 278 | Introduction to Statistical Signal Processing | 3 Units
  • EE 310 | Ubiquitous Sensing, Computing and Communication Seminar | 1 Unit
  • AA 274 | Principles of Robotic Autonomy | 3 Units
  • CS 224N | Natural Language Processing with Deep Learning | 3 Units
  • EE 373A | Adaptive Signal Processing | 3 Units
  • EE 203 | The Entrepreneurial Engineer | 1 Unit
  • AA 203 | Introduction to Optimal Control and Dynamic Optimization | 3 Units
  • AA 273 | State Estimation and Filtering for Aerospace Systems | 3 Units
  • CS 547 | Human-Computer Interaction Seminar | 1 Unit
  • EE 380 | Colloquium on Computer Systems | 1 Unit
  • MS&E 472 | Entrepreneurial Thought Leaders’ Seminar | 1 Unit

29 units (58 credits) in total.

Linköping University

  • TSEA51 | Switching Theory and Logical Design | 4 Credits
  • TATM79 | Foundation Course in Mathematics | 6 Credits
  • TFYY51 | Engineering Project | 6 Credits
  • TATA24 | Linear Algebra | 8 Credits
  • TATA41 | Calculus in One Variable 1 | 6 Credits
  • TATA42 | Calculus in One Variable 2 | 6 Credits
  • TATA40 | Perspectives on Mathematics | 1 Credit
  • TATA14 | The Language of Mathematics | 4 Credits
  • TFYA10 | Wave Motion | 8 Credits
  • TFFM12 | Perspectives on Physics | 2 Credits
  • TATA43 | Calculus in Several Variables | 8 Credits
  • TDDC74 | Programming: Abstraction and Modelling | 8 Credits
  • TSRT04 | Introduction in Matlab | 2 Credits
  • TATA44 | Vector Analysis | 4 Credits
  • TANA21 | Scientific Computing | 6 Credits
  • TSTE05 | Electronics and Measurement Technology | 8 Credits
  • TATA34 | Real Analysis, Honours Course | 6 Credits
  • TMME12 | Engineering Mechanics Y | 4 Credits
  • TATA45 | Complex Analysis | 6 Credits
  • TMME04 | Engineering Mechanics II | 6 Credits
  • TAOP07 | Introduction to Optimization | 6 Credits
  • TATA53 | Linear Algebra, Honours Course | 6 Credits
  • TAMS14 | Probability, First Course | 4 Credits
  • TSEA28 | Computer Hardware and Architecture Y | 6 Credits
  • TFYA13 | Electromagnetic Field Theory | 8 Credits
  • TATA77 | Fourier Analysis | 6 Credits
  • TAMS24 | Statistics, First Course | 4 Credits
  • TSDT18 | Signals and Systems | 6 Credits
  • TFYA12 | Thermodynamics and Statistical Mechanics | 6 Credits
  • TATM85 | Functional Analysis | 6 Credits
  • TDDC76 | Programming and Data Structures | 8 Credits
  • TSRT12 | Automatic Control Y | 6 Credits
  • TFYA73 | Modern Physics I | 4 Credits
  • TSEA56 | Electronics Engineering - Bachelor Project | 16 Credits
  • TATA66 | Fourier and Wavelet Analysis | 6 Credits
  • TSKS10 | Signals, Information and Communication | 4 Credits
  • TEAE01 | Industrial Economics, Basic Course | 6 Credits
  • TSRT62 | Modelling and Simulation | 6 Credits
  • TSRT10 | Automatic Control - Project Course | 12 Credits
  • TGTU49 | History of Technology | 6 Credits
  • TSEA81 | Computer Engineering and Real-time Systems | 6 Credits
  • TQET33 | Degree Project - Master’s Thesis | 30 Credits

277 credits in total.

Student Projects

Semantic Segmentation for Autonomous Driving.

Website Aiming to Increase Interest in Higher Education Among Youths.

Autonomous/Web Controlled TurtleBot3.

Autonomous Minesweeping System.

TensorFlow Implementation of SqueezeDet.

Autonomous/Web Controlled RC Car.

Deep Learning Demo/Test Platform.

The SE-Sync Algorithm for Pose-Graph SLAM.

Neural Image Captioning for Intelligent Vehicle-to-Passenger Communication.

Control of an Inverted Double Pendulum using Reinforcement Learning.

Web Tool for Analysis and Visualization of Sensor Data.

Autonomous/Web Controlled Raspberry Pi & Arduino Robot.

2D Adventure Game.