forked from JonasGeiping/jonasgeiping.github.io
index.json
1 lines (1 loc) · 28.3 KB
[{"authors":["admin"],"categories":null,"content":"Hello, I\u0026rsquo;m Jonas. I conduct research in computer science as a PhD student at the University of Siegen. My background is in Mathematics, more specifically in mathematical optimization, and I am interested in research that intersects current deep learning and mathematical optimization, with my main area of applications being computer vision.\n","date":-62135596800,"expirydate":-62135596800,"kind":"term","lang":"en","lastmod":-62135596800,"objectID":"2525497d367e79493fd32b198b28f040","permalink":"https://jonasgeiping.github.io/author/jonas-geiping/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/author/jonas-geiping/","section":"authors","summary":"Hello, I\u0026rsquo;m Jonas. I conduct research in computer science as a PhD student at the University of Siegen. My background is in Mathematics, more specifically in mathematical optimization, and I am interested in research that intersects current deep learning and mathematical optimization, with my main area of applications being computer vision.","tags":null,"title":"Jonas Geiping","type":"authors"},{"authors":[],"categories":null,"content":"","date":1599436800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1599436800,"objectID":"c508d3b90d72b37301666273fda312fc","permalink":"https://jonasgeiping.github.io/talk/bmvc2020/","publishdate":"2020-09-07T00:00:00Z","relpermalink":"/talk/bmvc2020/","section":"talk","summary":"Presented ”Fast Convex Relaxations via Graph Discretizations” as an oral presentation at BMVC 2020.","tags":[],"title":"31st British Machine Vision Conference (BMVC 2020)","type":"talk"},{"authors":["Jonas Geiping","Fjedor Gaede","Hartmut Bauermeister","Michael 
Moeller"],"categories":[],"content":"","date":1598918400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1600541254,"objectID":"11f3aba5e9d82e7f0c239c720cb45796","permalink":"https://jonasgeiping.github.io/publication/geiping-fast-2020-1/","publishdate":"2020-09-19T18:47:34.637159Z","relpermalink":"/publication/geiping-fast-2020-1/","section":"publication","summary":"Matching and partitioning problems are fundamental to computer vision applications, with examples in multilabel segmentation, stereo estimation and optical-flow computation. These tasks can be posed as non-convex energy minimization problems and solved to near-global optimality by recent convex lifting approaches. Yet, applying these techniques comes with a significant computational effort, reducing their feasibility in practical applications. We discuss spatial discretization of continuous partitioning problems into a graph structure, generalizing discretization onto a Cartesian grid. This setup allows us to faithfully work on super-pixel graphs constructed by SLIC or Cut-Pursuit, massively decreasing the computational effort for lifted partitioning problems compared to a Cartesian grid, while optimal energy values remain similar: The global matching is still solved to near-global optimality. We discuss this methodology in detail and show examples in multi-label segmentation by minimal partitions and stereo estimation, where we demonstrate that the proposed graph discretization can reduce runtime as well as memory consumption of convex relaxations of matching problems by up to a factor of 10.","tags":["\"Computer Science - Computer Vision and Pattern Recognition\"","\"Mathematics - Optimization and Control\""],"title":"Fast Convex Relaxations Using Graph Discretizations","type":"publication"},{"authors":["Jonas Geiping","Liam Fowl","W. 
Ronny Huang","Wojciech Czaja","Gavin Taylor","Michael Moeller","Tom Goldstein"],"categories":[],"content":"","date":1598918400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1600541255,"objectID":"9447d558593b2047f0d5bf11443b5a67","permalink":"https://jonasgeiping.github.io/publication/geiping-witches-2020/","publishdate":"2020-09-19T18:47:35.03237Z","relpermalink":"/publication/geiping-witches-2020/","section":"publication","summary":"Data Poisoning attacks involve an attacker modifying training data to maliciously control a model trained on this data. Previous poisoning attacks against deep neural networks have been limited in scope and success, working only in simplified settings or being prohibitively expensive for large datasets. In this work, we focus on a particularly malicious poisoning attack that is both \\\"from scratch\\\" and \\\"clean label\\\", meaning we analyze an attack that successfully works against new, randomly initialized models, and is nearly imperceptible to humans, all while perturbing only a small fraction of the training data. The central mechanism of this attack is matching the gradient direction of malicious examples. We analyze why this works, supplement it with practical considerations, and show its threat to real-world practitioners, finding that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset. 
Finally, we demonstrate the limitations of existing defensive strategies against such an attack, concluding that data poisoning is a credible threat, even for large-scale deep learning systems.","tags":["\"Computer Science - Computer Vision and Pattern Recognition\"","\"Computer Science - Machine Learning\""],"title":"Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching","type":"publication"},{"authors":[],"categories":null,"content":"","date":1593561600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1593561600,"objectID":"1db25fb35c3cdebf06f20e89ac8c71c3","permalink":"https://jonasgeiping.github.io/talk/siam2020/","publishdate":"2020-07-01T00:00:00Z","relpermalink":"/talk/siam2020/","section":"talk","summary":"Spoke about ”Parametric Majorization for Data-Driven Energy Minimization Methods” at the SIAM MDS2020 minisymposium ”Learning parameterized energy minimization models”.","tags":[],"title":"SIAM Conference on Mathematics of Data Science 2020","type":"talk"},{"authors":["Ping-Yeh Chiang","Jonas Geiping","Micah Goldblum","Tom Goldstein","Renkun Ni","Steven Reich","Ali Shafahi"],"categories":[],"content":"","date":1588291200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1600541254,"objectID":"25a1e8832db3e4f1644759c8e16e443e","permalink":"https://jonasgeiping.github.io/publication/chiang-witchcraft-2020/","publishdate":"2020-09-19T18:47:34.424726Z","relpermalink":"/publication/chiang-witchcraft-2020/","section":"publication","summary":"State-of-the-art adversarial attacks on neural networks use expensive iterative methods and numerous random restarts from different initial points. Iterative FGSM-based methods without restarts trade off performance for computational efficiency because they do not adequately explore the image space and are highly sensitive to the choice of step size. 
We propose a variant of Projected Gradient Descent (PGD) that uses a random step size to improve performance without resorting to expensive random restarts. Our method, Wide Iterative Stochastic crafting (WITCHcraft), achieves results superior to the classical PGD attack on the CIFAR-10 and MNIST data sets but without additional computational cost. This simple modification of PGD makes crafting attacks more economical, which is important in situations like adversarial training where attacks need to be crafted in real time.","tags":["\"Adversarial\"","\"adversarial attacks\"","\"Attack\"","\"CIFAR\"","\"classical PGD attack\"","\"CNN\"","\"gradient methods\"","\"iterative FGSM-based methods\"","\"iterative methods\"","\"learning (artificial intelligence)\"","\"neural nets\"","\"neural networks\"","\"PGD\"","\"PGD attacks\"","\"projected gradient descent\"","\"random step size\"","\"stochastic processes\"","\"wide iterative stochastic crafting\"","\"witchcraft\""],"title":"Witchcraft: Efficient PGD Attacks with Random Step Size","type":"publication"},{"authors":[],"categories":null,"content":"","date":1585699200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1585699200,"objectID":"d3299ad91945ca4df0b73eb439b85fa9","permalink":"https://jonasgeiping.github.io/talk/iclr2020/","publishdate":"2020-04-01T00:00:00Z","relpermalink":"/talk/iclr2020/","section":"talk","summary":"Discussed our oral presentation ”Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory”.","tags":[],"title":"Eighth International Conference on Learning Representations (ICLR 2020)","type":"talk"},{"authors":["W. 
Ronny Huang","Jonas Geiping","Liam Fowl","Gavin Taylor","Tom Goldstein"],"categories":[],"content":"","date":1585699200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1600541255,"objectID":"56a7e75230c57dedf0c4f48547f8b728","permalink":"https://jonasgeiping.github.io/publication/huang-metapoison-2020/","publishdate":"2020-09-19T18:47:35.248442Z","relpermalink":"/publication/huang-metapoison-2020/","section":"publication","summary":"Data poisoning--the process by which an attacker takes control of a model by making imperceptible changes to a subset of the training data--is an emerging threat in the context of neural networks. Existing attacks for data poisoning have relied on hand-crafted heuristics. Instead, we pose crafting poisons more generally as a bi-level optimization problem, where the inner level corresponds to training a network on a poisoned dataset and the outer level corresponds to updating those poisons to achieve a desired behavior on the trained model. We then propose MetaPoison, a first-order method to solve this optimization quickly. MetaPoison is effective: it outperforms previous clean-label poisoning methods by a large margin under the same setting. MetaPoison is robust: its poisons transfer to a variety of victims with unknown hyperparameters and architectures. MetaPoison is also general-purpose, working not only in fine-tuning scenarios, but also for end-to-end training from scratch with remarkable success, e.g. causing a target image to be misclassified 90% of the time via manipulating just 1% of the dataset. Additionally, MetaPoison can achieve arbitrary adversary goals not previously possible--like using poisons of one class to make a target image don the label of another arbitrarily chosen class. Finally, MetaPoison works in the real world. We demonstrate successful data poisoning of models trained on Google Cloud AutoML Vision. 
Code and premade poisons are provided at https://github.com/wronnyhuang/metapoison","tags":["\"Computer Science - Artificial Intelligence\"","\"Computer Science - Computer Vision and Pattern Recognition\"","\"Computer Science - Cryptography and Security\"","\"Computer Science - Machine Learning\"","\"Statistics - Machine Learning\""],"title":"MetaPoison: Practical General-Purpose Clean-Label Data Poisoning","type":"publication"},{"authors":["Micah Goldblum","Jonas Geiping","Avi Schwarzschild","Michael Moeller","Tom Goldstein"],"categories":[],"content":"","date":1585699200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1600541255,"objectID":"84223e74de49a26ee95ab5a09094b71a","permalink":"https://jonasgeiping.github.io/publication/goldblum-truth-2020/","publishdate":"2020-09-19T18:47:35.100817Z","relpermalink":"/publication/goldblum-truth-2020/","section":"publication","summary":"We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike. In this work, we: (1) prove the widespread existence of suboptimal local minima in the loss landscape of neural networks, and we use our theory to find examples; (2) show that small-norm parameters are not optimal for generalization; (3) demonstrate that ResNets do not conform to wide-network theories, such as the neural tangent kernel, and that the interaction between skip connections and batch normalization plays a role; (4) find that rank does not correlate with generalization or robustness in a practical setting.","tags":[],"title":"Truth or Backpropaganda? 
An Empirical Investigation of Deep Learning Theory","type":"publication"},{"authors":["Jonas Geiping","Hartmut Bauermeister","Hannah Dröge","Michael Moeller"],"categories":[],"content":"","date":1583020800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1600541254,"objectID":"e7c4c8a58d9de6e178c71bb83e8695b9","permalink":"https://jonasgeiping.github.io/publication/geiping-inverting-2020/","publishdate":"2020-09-19T18:47:34.790338Z","relpermalink":"/publication/geiping-inverting-2020/","section":"publication","summary":"The idea of federated learning is to collaboratively train a neural network on a server. Each user receives the current weights of the network and in turn sends parameter updates (gradients) based on local data. This protocol has been designed not only to train neural networks data-efficiently, but also to provide privacy benefits for users, as their input data remains on device and only parameter gradients are shared. In this paper we show that sharing parameter gradients is by no means secure: By exploiting a cosine similarity loss along with optimization methods from adversarial attacks, we are able to faithfully reconstruct images at high resolution from the knowledge of their parameter gradients, and demonstrate that such a break of privacy is possible even for trained deep networks. 
Moreover, we analyze the effects of architecture as well as parameters on the difficulty of reconstructing the input image, prove that any input to a fully connected layer can be reconstructed analytically independent of the remaining architecture, and show numerically that even averaging gradients over several iterations or several images does not protect the user's privacy in federated learning applications in computer vision.","tags":[],"title":"Inverting Gradients -- How Easy Is It to Break Privacy in Federated Learning?","type":"publication"},{"authors":["Andreas Görlitz","Jonas Geiping","Andreas Kolb"],"categories":[],"content":"","date":1572566400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1600541255,"objectID":"9669837da2d3bc85123d11d445873cae","permalink":"https://jonasgeiping.github.io/publication/gorlitz-piecewise-2019-1/","publishdate":"2020-09-19T18:47:35.182087Z","relpermalink":"/publication/gorlitz-piecewise-2019-1/","section":"publication","summary":"In this paper, we introduce a novel variational approach to estimate the scene flow from RGB-D images. We regularize the ill-conditioned problem of scene flow estimation in a unified framework by enforcing piecewise rigid motion through decomposition into rotational and translational motion parts. Our model crucially regularizes these components by an L0 “norm”, thereby facilitating implicit motion segmentation in a joint energy minimization problem. Yet, we also show that this energy can be efficiently minimized by a proximal primal-dual algorithm. By implementing this approximate L0 rigid motion regularization, our scene flow estimation approach implicitly segments the observed scene into regions of nearly constant rigid motion. 
We evaluate our joint scene flow and segmentation estimation approach on a variety of test scenarios, with and without ground truth data, and demonstrate that we outperform current scene flow techniques.","tags":["\"constant rigid motion\"","\"ill-conditioned problem\"","\"image colour analysis\"","\"image segmentation\"","\"image sequences\"","\"implicit motion segmentation estimation approach\"","\"joint energy minimization problem\"","\"L0 rigid motion regularization\"","\"minimisation\"","\"motion estimation\"","\"observed scene\"","\"piecewise rigid motion\"","\"piecewise rigid scene flow techniques\"","\"proximal primal-dual algorithm\"","\"RGB-D images\"","\"rotational motion parts\"","\"scene flow estimation approach\"","\"translational motion parts\"","\"variational approach\""],"title":"Piecewise Rigid Scene Flow with Implicit Motion Segmentation","type":"publication"},{"authors":[],"categories":null,"content":"","date":1569888000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1569888000,"objectID":"4ea9873a827b30dfcae8e35f9b902149","permalink":"https://jonasgeiping.github.io/talk/iccv2019/","publishdate":"2019-10-01T00:00:00Z","relpermalink":"/talk/iccv2019/","section":"talk","summary":"Presented ”Parametric Majorization for Data-Driven Energy Minimization Methods” at ICCV 2019.","tags":[],"title":"ICCV 2019","type":"talk"},{"authors":["Jonas Geiping","Michael Moeller"],"categories":[],"content":"","date":1546300800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1600541255,"objectID":"0fc175e5a7d2a444cc610b09cfa4b385","permalink":"https://jonasgeiping.github.io/publication/geiping-parametric-2019-2/","publishdate":"2020-09-19T18:47:34.965094Z","relpermalink":"/publication/geiping-parametric-2019-2/","section":"publication","summary":"Energy minimization methods are a classical tool in a multitude of computer vision applications. 
While they are interpretable and well-studied, their regularity assumptions are difficult to design by hand. Deep learning techniques on the other hand are purely data-driven, often provide excellent results, but are very difficult to constrain to predefined physical or safety-critical models. A possible combination between the two approaches is to design a parametric energy and train the free parameters in such a way that minimizers of the energy correspond to desired solutions on a set of training examples. Unfortunately, such formulations typically lead to bi-level optimization problems, for which common optimization algorithms are difficult to scale to modern requirements in data processing and efficiency. In this work, we present a new strategy to optimize these bi-level problems. We investigate surrogate single-level problems that majorize the target problems and can be implemented with existing tools, leading to efficient algorithms without collapse of the energy function. This framework of strategies enables new avenues for the training of parameterized energy minimization models from large data.","tags":[],"title":"Parametric Majorization for Data-Driven Energy Minimization Methods","type":"publication"},{"authors":[],"categories":null,"content":"","date":1530403200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1530403200,"objectID":"aae836ed73c1a55e3e271f9a144f6c43","permalink":"https://jonasgeiping.github.io/talk/ifip/","publishdate":"2018-07-01T00:00:00Z","relpermalink":"/talk/ifip/","section":"talk","summary":"Visited the IFIP TC 7 Conference on System Modelling and Optimization.","tags":[],"title":"IFIP TC 7 Conference on System Modelling and 
Optimization","type":"talk"},{"authors":[],"categories":null,"content":"","date":1530403200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1530403200,"objectID":"dc3258394dd125f1823e9de85c3cb206","permalink":"https://jonasgeiping.github.io/talk/icml2018/","publishdate":"2018-07-01T00:00:00Z","relpermalink":"/talk/icml2018/","section":"talk","summary":"Visited the International Conference on Machine Learning.","tags":[],"title":"International Conference on Machine Learning, 2018","type":"talk"},{"authors":[],"categories":null,"content":"","date":1527811200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1527811200,"objectID":"f63e17d0b0f33328c6d58e4cebbf7508","permalink":"https://jonasgeiping.github.io/talk/siam2018/","publishdate":"2018-06-01T00:00:00Z","relpermalink":"/talk/siam2018/","section":"talk","summary":"Presented the recent SIAM publication ”Composite Optimization by Nonconvex Majorization-Minimization”","tags":[],"title":"SIAM Conference on Imaging Sciences","type":"talk"},{"authors":[],"categories":null,"content":"","date":1520985600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1520985600,"objectID":"3c0a6aa3ad3be872160a5c37ef4d22d9","permalink":"https://jonasgeiping.github.io/talk/siegen2018/","publishdate":"2018-03-14T00:00:00Z","relpermalink":"/talk/siegen2018/","section":"talk","summary":"Presented a poster regarding composite nonconvex optimization.","tags":[],"title":"Workshop: Imaging and Vision from Theory to Applications","type":"talk"},{"authors":[],"categories":null,"content":"","date":1517443200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1517443200,"objectID":"c9616b536e9c9a38453944bce5c92cad","permalink":"https://jonasgeiping.github.io/talk/gamm2018/","publishdate":"2018-02-01T00:00:00Z","relpermalink":"/talk/gamm2018/","section":"talk","summary":"Held a presentation titled ”Composite Optimization by Nonconvex Majorization-Minimization” at the GAMM (German Society for Applied 
Math and Mechanics) Annual Meeting 2018.","tags":[],"title":"GAMM Annual Meeting 2018","type":"talk"},{"authors":[],"categories":null,"content":"","date":1517443200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1517443200,"objectID":"0947794c1820433821988d2e733d9e3f","permalink":"https://jonasgeiping.github.io/talk/winteropt/","publishdate":"2018-02-01T00:00:00Z","relpermalink":"/talk/winteropt/","section":"talk","summary":"Presented a poster with preliminary work on ”Composite Optimization by Nonconvex Majorization-Minimization” and visited the winter school ”Modern Methods in Nonsmooth Optimization”.","tags":[],"title":"Winter School on Modern Methods in Nonsmooth Optimization","type":"talk"},{"authors":["Jonas Geiping","Michael Moeller"],"categories":[],"content":"","date":1514764800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1600541254,"objectID":"479a10e4de17ceeead3cd2847cfde441","permalink":"https://jonasgeiping.github.io/publication/geiping-composite-2018/","publishdate":"2020-09-19T18:47:34.573689Z","relpermalink":"/publication/geiping-composite-2018/","section":"publication","summary":"The minimization of a nonconvex composite function can model a variety of imaging tasks. A popular class of algorithms for solving such problems are majorization-minimization techniques which iteratively approximate the composite nonconvex function by a majorizing function that is easy to minimize. Most techniques, e.g., gradient descent, utilize convex majorizers in order to guarantee that the majorizer is easy to minimize. In our work we consider a natural class of nonconvex majorizers for these functions, and show that these majorizers are still sufficient for a globally convergent optimization scheme. Numerical results illustrate that by applying this scheme, one can often obtain superior local optima compared to previous majorization-minimization methods, when the nonconvex majorizers are solved to global optimality. 
Finally, we illustrate the behavior of our algorithm for depth superresolution from raw time-of-flight data.","tags":["\"90C26; 90C06; 68U10; 32B20; 65K10; 47J06\"","\"Computer Science - Computer Vision and Pattern Recognition\"","\"Mathematics - Numerical Analysis\"","\"Mathematics - Optimization and Control\""],"title":"Composite Optimization by Nonconvex Majorization-Minimization","type":"publication"},{"authors":["Jonas Geiping","Hendrik Dirks","Daniel Cremers","Michael Moeller"],"categories":[],"content":"","date":1514764800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1600541254,"objectID":"75fe9ac2a800ad756fc66fd2608ac2e9","permalink":"https://jonasgeiping.github.io/publication/geiping-multiframe-2018/","publishdate":"2020-09-19T18:47:34.862942Z","relpermalink":"/publication/geiping-multiframe-2018/","section":"publication","summary":"The idea of video super resolution is to use different viewpoints of a single scene to enhance the overall resolution and quality. Classical energy minimization approaches first establish a correspondence of the current frame to all its neighbors in some radius and then use this temporal information for enhancement. In this paper, we propose the first variational super resolution approach that computes several super resolved frames in one batch optimization procedure by incorporating motion information between the high-resolution image frames themselves. As a consequence, the number of motion estimation problems grows linearly in the number of frames, as opposed to the quadratic growth of classical methods, and temporal consistency is enforced naturally. We use infimal convolution regularization as well as an automatic parameter balancing scheme to automatically determine the reliability of the motion information and reweight the regularization locally. 
We demonstrate that our approach yields state-of-the-art results and is even competitive with machine learning approaches.","tags":[],"title":"Multiframe Motion Coupling for Video Super Resolution","type":"publication"},{"authors":[],"categories":null,"content":"","date":1488326400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1488326400,"objectID":"2e0c8383197d6debafa57c2a412ec26c","permalink":"https://jonasgeiping.github.io/talk/amm3/","publishdate":"2017-03-01T00:00:00Z","relpermalink":"/talk/amm3/","section":"talk","summary":"Visited the ”Workshop: Shape, Images and Optimization”.","tags":[],"title":"3rd Applied Mathematics Symposium Münster","type":"talk"},{"authors":[],"categories":null,"content":"","date":1488326400,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1488326400,"objectID":"9bbb7caa69eec99a8397de2ebfe4d6f0","permalink":"https://jonasgeiping.github.io/talk/iccv2017/","publishdate":"2017-03-01T00:00:00Z","relpermalink":"/talk/iccv2017/","section":"talk","summary":"Presented the work ”Multiframe Motion Coupling for Video Super Resolution” at the 11th International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition. Visited the International Conference on Computer Vision, 2017.","tags":[],"title":"ICCV and EMMCVPR 2017","type":"talk"},{"authors":["Jonas Alexander Geiping"],"categories":[],"content":"","date":1472688000,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1600541254,"objectID":"41910690aaeb803acebdb4758177035b","permalink":"https://jonasgeiping.github.io/publication/geiping-image-2016/","publishdate":"2020-09-19T18:47:34.725607Z","relpermalink":"/publication/geiping-image-2016/","section":"publication","summary":"Three-dimensional time series data from confocal fluorescence microscopes is a valuable tool in biological research, but the data is distorted by Poisson noise and defocus blur of varying axial extent. 
We seek to obtain structural information about the development of neural tissue from these images and define a segmentation by an appropriate thresholding of reconstructed data. We model the data degradation and develop a reconstruction formulation based on variational methods. Due to imprecise knowledge of the blur kernel we extend local sparsity regularization to a local patch and use this prior as additional regularization. We show favorable analytical properties for this approach, implement the resulting algorithm with a primal-dual optimization scheme and test on artificial and real data.","tags":[],"title":"Image Analysis of Neural Tissue Development: Variational Methods for Segmentation and 3D-Reconstruction from Large Pinhole Confocal Fluorescence Microscopy","type":"publication"},{"authors":[],"categories":null,"content":"","date":1459468800,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1459468800,"objectID":"e052c52026453c57651d397c8412cd70","permalink":"https://jonasgeiping.github.io/talk/dtu/","publishdate":"2016-04-01T00:00:00Z","relpermalink":"/talk/dtu/","section":"talk","summary":"Visited the ”COST Training School on Algebraic Reconstruction Methods in Tomography” and workshop ”HD-Tomo Days”.","tags":[],"title":"Training School and Workshop at DTU University","type":"talk"},{"authors":[],"categories":null,"content":"","date":1441065600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1441065600,"objectID":"becc88d711f702f2f2ceddaa3781c617","permalink":"https://jonasgeiping.github.io/talk/my-talk-name/","publishdate":"2015-09-01T00:00:00Z","relpermalink":"/talk/my-talk-name/","section":"talk","summary":"Visited ”Summer School on Inverse Problems” and workshop on ”Variational Methods for Dynamic Inverse Problems and Imaging”.","tags":[],"title":"1st Applied Mathematics Symposium Münster","type":"talk"},{"authors":["Jonas Alexander 
Geiping"],"categories":[],"content":"","date":1409529600,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1600541254,"objectID":"db70ce79017c0fb23c9e4aa9b54f1c81","permalink":"https://jonasgeiping.github.io/publication/geiping-comparison-2014/","publishdate":"2020-09-19T18:47:34.511122Z","relpermalink":"/publication/geiping-comparison-2014/","section":"publication","summary":"","tags":[],"title":"Comparison of Topology-Preserving Segmentation Methods and Application to Mitotic Cell Tracking","type":"publication"}]