{"id":1944,"date":"2025-05-02T16:09:21","date_gmt":"2025-05-02T14:09:21","guid":{"rendered":"https:\/\/mpr-projects.com\/?page_id=1944"},"modified":"2025-08-01T10:47:57","modified_gmt":"2025-08-01T08:47:57","slug":"portfolio","status":"publish","type":"page","link":"https:\/\/mpr-projects.com\/","title":{"rendered":"Portfolio"},"content":{"rendered":"\n<div style=\"height:1px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-group alignwide is-layout-flow wp-block-group-is-layout-flow\">\n<div class=\"wp-block-group is-layout-flow wp-block-group-is-layout-flow\">\n<h3 class=\"wp-block-heading is-style-default\">Reinforcement Learning Agent<\/h3>\n\n\n\n<p>This project was about testing different reinforcement-learning algorithms. After going through the <a href=\"https:\/\/davidstarsilver.wordpress.com\/teaching\/\" target=\"_blank\" rel=\"noreferrer noopener\">lecture notes<\/a> of David Silver&#8217;s (DeepMind) UCL course on reinforcement learning, I wanted to apply the methods to a game that was i) simple enough to train on my single GPU and ii) fun and a bit out of the ordinary.<\/p>\n\n\n\n<p>I picked the <em><a href=\"https:\/\/en.wikipedia.org\/wiki\/Royal_Game_of_Ur\" target=\"_blank\" rel=\"noreferrer noopener\">Royal Game of Ur<\/a><\/em> because it&#8217;s a game that was played for thousands of years, yet today hardly anybody plays it or even knows about it. 
So any strategies that might have been common ~3000 years ago have probably been forgotten &#8211; which made this game an interesting choice.<\/p>\n\n\n\n<figure class=\"wp-block-embed alignright is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Gameplay Royal Game of Ur - mpr vs Reinforcement-Learning agent\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/-thqzxv9uro?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>I implemented and tested pretty much all methods covered in David Silver&#8217;s lectures. Specifically, I implemented <\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Q-Learning with Monte Carlo, Temporal Difference (td0 and tdn) and TD-Lambda, both on- and off-policy,<\/li>\n\n\n\n<li>Policy-Gradient with Monte Carlo, and<\/li>\n\n\n\n<li>Actor-Critic with Monte Carlo, Temporal Differences (td0 and tdn) and TD-Lambda. The Monte Carlo method was implemented on- and off-policy. <\/li>\n<\/ul>\n\n\n\n<p>Of these methods, on-policy Q-Learning with Monte Carlo (q_mc) resulted in the strongest player. You can find the code and weights to play against my best model <a href=\"https:\/\/github.com\/mpr-projects\/Royal_Game_of_Ur_RL_Agent\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a>.<\/p>\n\n\n\n<p>The game involves <em>a lot<\/em> of luck, so a good strategy can only do so much. But when I first played against the q_mc agent I ended up losing most games. 
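The update at the heart of the on-policy Monte-Carlo Q-learning (q_mc) variant can be sketched in a few lines. The actual agent uses JAX/Haiku networks and the real game state; the tabular version below, with made-up states and rewards, only illustrates the return-based update rule:

```python
from collections import defaultdict

def mc_q_update(episodes, gamma=1.0, alpha=0.1):
    """On-policy Monte-Carlo Q-learning: move Q(s, a) toward the
    observed discounted return G of each visit."""
    Q = defaultdict(float)
    for episode in episodes:              # episode = [(state, action, reward), ...]
        G = 0.0
        for state, action, reward in reversed(episode):
            G = reward + gamma * G        # discounted return from this step on
            Q[(state, action)] += alpha * (G - Q[(state, action)])
    return Q
```

Acting epsilon-greedily with respect to Q while collecting episodes makes this on-policy.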
Once I got into the game I started winning about 50% of the games (or maybe a bit more often).<\/p>\n\n\n\n<p>I also used the code to create an agent for a common version of the game <a href=\"https:\/\/en.wikipedia.org\/wiki\/Mancala\" data-type=\"link\" data-id=\"https:\/\/en.wikipedia.org\/wiki\/Mancala\" target=\"_blank\" rel=\"noreferrer noopener\">Mancala<\/a>. It&#8217;s available in the same repository but it doesn&#8217;t include a GUI or any weights.<\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>Tech:<\/strong> JAX, Haiku (training), Python (sample creation)<\/p>\n\n\n\n<div style=\"height:25px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-group is-layout-flow wp-block-group-is-layout-flow\">\n<h3 class=\"wp-block-heading is-style-default\">Paper: Variable-Input Deep Operator Networks<\/h3>\n\n\n\n<p>This is <a href=\"https:\/\/arxiv.org\/abs\/2205.11404\" target=\"_blank\" rel=\"noreferrer noopener\">a paper<\/a> that resulted from my time as a Research Assistant in <a href=\"https:\/\/camlab.ethz.ch\/the-group\/group-head.html\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"broken_link\">Prof. Dr. Mishra<\/a>&#8217;s group at the ETH. Our objective was to create a machine learning model that can learn the solutions to partial differential equations (PDEs) whose observations are given on an irregular grid. My main responsibility was the model structure and implementation. 
You can find the code on <a href=\"https:\/\/github.com\/mpr-projects\/Variable-Input-Deep-Operator-Networks\/tree\/main\" target=\"_blank\" rel=\"noreferrer noopener\">github<\/a>.<\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>Tech:<\/strong> PyTorch (training), TensorBoard (visualization), C++ \/ Python (training data creation)<\/p>\n\n\n\n<div style=\"height:25px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-group alignwide is-layout-flow wp-block-group-is-layout-flow\">\n<h3 class=\"wp-block-heading\">Creation of a Video Streaming Site<\/h3>\n\n\n\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;69debcad985f6&quot;}\" data-wp-interactive=\"core\/image\" data-wp-key=\"69debcad985f6\" class=\"wp-block-image alignright size-medium wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"300\" height=\"170\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on--click=\"actions.showLightbox\" data-wp-on--load=\"callbacks.setButtonStyles\" data-wp-on-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/mpr-projects.com\/wp-content\/uploads\/2025\/07\/screenshot_course_page-300x170.png\" alt=\"\" class=\"wp-image-2254\" srcset=\"https:\/\/mpr-projects.com\/wp-content\/uploads\/2025\/07\/screenshot_course_page-300x170.png 300w, https:\/\/mpr-projects.com\/wp-content\/uploads\/2025\/07\/screenshot_course_page-1024x579.png 1024w, https:\/\/mpr-projects.com\/wp-content\/uploads\/2025\/07\/screenshot_course_page-768x434.png 768w, https:\/\/mpr-projects.com\/wp-content\/uploads\/2025\/07\/screenshot_course_page-1536x868.png 1536w, https:\/\/mpr-projects.com\/wp-content\/uploads\/2025\/07\/screenshot_course_page-2048x1158.png 2048w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" 
\/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"Enlarge\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewBox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" \/>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure>\n\n\n\n<p>In this project I created a highly optimized <a href=\"https:\/\/mpr-projects.net\" data-type=\"link\" data-id=\"https:\/\/mpr-projects.net\" target=\"_blank\" rel=\"noreferrer noopener\">video streaming website<\/a> that contains some of my YouTube videos and other content. The website is available in multiple languages and it features a secure login section for watching <em>premium<\/em> content (currently populated with placeholder data).<\/p>\n\n\n\n<p>The back end is written with <a href=\"https:\/\/flask.palletsprojects.com\/en\/stable\/\" target=\"_blank\" rel=\"noreferrer noopener\">flask<\/a>, the front end with HTML, CSS and JavaScript. The layout of the video page is created with flexbox. The back end is deployed on AWS&#8217;s Elastic Beanstalk. The front end, videos and other data are stored in S3 buckets. I also use ElastiCache for server-side caching of frequently used assets and a PostgreSQL database for persistent storage. The entire website sits behind the CloudFront CDN, which keeps load times fast. The whole stack is defined using AWS&#8217;s Cloud Development Kit so it can easily be adjusted or recreated if required.<\/p>\n\n\n\n<p>I wrote this page and set it up on AWS in about 10 days with the help of an LLM. 
The LLM made the coding process a lot faster but it struggled once the code base grew larger. At times it also suggested wrong or inefficient code, so my own understanding of the code and my debugging experience were essential for getting this up and running quickly.<\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>Tech:<\/strong> Flask \/ Python (backend), HTML, JavaScript, CSS (frontend), AWS (infrastructure)<\/p>\n\n\n\n<div style=\"height:25px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-group is-layout-flow wp-block-group-is-layout-flow\">\n<h3 class=\"wp-block-heading\">3D-Reconstructions<\/h3>\n\n\n\n<figure class=\"wp-block-embed alignright is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Automating 3D Reconstructions: First Milestone\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/UNOtbhvH21o?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>This is a large, ongoing project where I&#8217;m developing a machine to automate 3d-reconstructions of small objects. A short video about the first prototype, which uses photogrammetry, is shown on the right. <\/p>\n\n\n\n<p>The first prototype mostly uses components that I already had at home: a camera intended for photography (Fuji X-T2), stepper motors, timing belts, 3d-printed parts to connect the components, and an Arduino and a Raspberry Pi to control everything.<\/p>\n\n\n\n<p>Because it relies on photogrammetry, the first prototype works well for objects whose surfaces have lots of features. 
Homogeneous or highly reflective surfaces are not reconstructed well due to a lack of features.<\/p>\n\n\n\n<p>The actual reconstructions use a mix of code that&#8217;s already implemented in the open-source program <a href=\"https:\/\/github.com\/alicevision\/Meshroom\" target=\"_blank\" rel=\"noreferrer noopener\">Meshroom<\/a> and code that I wrote to extend it. While I&#8217;ve read most papers that describe the methods used in Meshroom (e.g. SfM and MVS), I didn&#8217;t implement those myself. My extensions focused mostly on using the known camera positions from my machine and on aligning multiple scans from different sides.<\/p>\n\n\n\n<p>For the second prototype I&#8217;m planning to use an industrial machine vision camera, a liquid lens for fast focusing, and a combination of different methods, such as Photometric Stereo, to ensure reliable reconstructions for homogeneous or reflective surfaces. The plan is to implement them in high-performance C++ code (not using Meshroom).<\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>Tech:<\/strong> Docker \/ CMake (compilation), C++ \/ Python (code adjustments and extensions), Arduino Language (controlling hardware), Python (coordinating camera and hardware)<\/p>\n\n\n\n<div style=\"height:25px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-group is-layout-flow wp-block-group-is-layout-flow\">\n<h3 class=\"wp-block-heading\">Computer Vision Theory<\/h3>\n\n\n\n<p>As I&#8217;ve been developing the machine to automate 3d-reconstructions, I&#8217;ve delved deeply into some theory related to computer vision. For example, getting an accurate reconstruction requires an accurate calibration of our camera. So we want to find the focal length, principal point, distortion coefficients, etc. 
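As a small illustration of what the distortion coefficients describe, here is the radial part of the Brown-Conrady model (the model commonly used in camera calibration, e.g. by OpenCV); the coefficient values below are made up for the example:

```python
def radial_distort(x, y, k1, k2):
    """Apply the radial part of the Brown-Conrady distortion model
    to normalized image coordinates (x, y). Calibration estimates
    the coefficients k1 and k2 (among others)."""
    r2 = x ** 2 + y ** 2                    # squared distance from the optical axis
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2   # radial scaling of the point
    return x * factor, y * factor

# Points further from the optical axis are shifted more; a negative k1
# pulls them inward (barrel distortion). Illustrative values only:
xd, yd = radial_distort(0.2, 0.1, k1=-0.3, k2=0.05)
```

Undistorting an image inverts exactly this mapping, which is why accurate coefficients matter for reconstruction.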
Choosing an appropriate camera and lens (for the second prototype) requires an understanding of image resolution (the system&#8217;s ability to resolve details) and the factors that affect it.<\/p>\n\n\n\n<figure class=\"wp-block-embed alignleft is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Computer Vision: The Camera Matrix\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/Hz8kz5aeQ44?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>When I dig into theory I usually write summaries to explain the topic in detail. Writing summaries helps me memorize concepts and it exposes holes in my reasoning.<\/p>\n\n\n\n<p>I&#8217;ve turned some of my summaries into videos that try to explain ideas without going too much into mathematical details. On the left there&#8217;s a video where I derive the camera matrix from the pinhole model. 
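A minimal numpy sketch of the projection that the camera matrix encodes, with illustrative intrinsics (focal lengths and principal point in pixels; the values are made up):

```python
import numpy as np

# Intrinsic camera matrix from the pinhole model: focal lengths fx, fy
# (in pixels) and principal point (cx, cy). Illustrative values.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(point_cam):
    """Project a 3D point given in camera coordinates to pixel coordinates."""
    p = K @ point_cam
    return p[:2] / p[2]   # perspective division by depth

u, v = project(np.array([0.1, -0.05, 2.0]))
```

The full camera matrix additionally contains the extrinsics [R|t], which transform world coordinates into camera coordinates before this step.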
<a href=\"https:\/\/youtu.be\/_sGG9liSDvM\" target=\"_blank\" rel=\"noreferrer noopener\">Here<\/a> you can find a video about image resolution and contrast, and about how diffraction, lens aberrations and our sensor affect them.<\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>Tech:<\/strong> Blender (Visualization), OpenCV (Image Processing)<\/p>\n\n\n\n<div style=\"height:25px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-group is-layout-flow wp-block-group-is-layout-flow\">\n<div class=\"wp-block-group is-layout-flow wp-block-group-is-layout-flow\">\n<h3 class=\"wp-block-heading\">Smaller Computer Vision Projects<\/h3>\n\n\n\n<p>Below are two smaller projects that didn&#8217;t require as much time as the other projects on this page.<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-group is-layout-flow wp-block-group-is-layout-flow\">\n<div style=\"height:2px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading\">Stereo Vision and Semi-Global Matching<\/h4>\n\n\n\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;69debcad98d77&quot;}\" data-wp-interactive=\"core\/image\" data-wp-key=\"69debcad98d77\" class=\"wp-block-image alignleft size-thumbnail wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"150\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on--click=\"actions.showLightbox\" data-wp-on--load=\"callbacks.setButtonStyles\" data-wp-on-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/mpr-projects.com\/wp-content\/uploads\/2025\/05\/DSCF0207_smaller-150x150.jpg\" alt=\"\" 
class=\"wp-image-2079\"\/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"Enlarge\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewBox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" \/>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure>\n\n\n\n<p>Semi-Global Matching is a common algorithm used in 3d-Reconstructions and in Stereo Vision. I wanted to learn more about it and get some hands-on experience. So after reading <a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/1467526\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"broken_link\">Mr Hirschm\u00fcller&#8217;s paper<\/a> I built a simple stereo setup, using two Raspberry Pi Camera Modules v3 and a Raspberry Pi 5. Then I calibrated the cameras and used OpenCV to compute disparity maps and point clouds. Since the quality of the output depends heavily on the values of the parameters used in SGM, I built a GUI to tune them. Visualization of the point cloud is done with <a href=\"https:\/\/www.open3d.org\/\">Open3D<\/a>. 
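To illustrate the matching costs that SGM starts from (before the path-wise cost aggregation that gives the algorithm its name), here is a tiny winner-takes-all disparity computation on synthetic 1-D signals. This is a deliberately simplified sketch, not the actual pipeline, which uses OpenCV on real image pairs:

```python
import numpy as np

def wta_disparity(left, right, max_disp, window=2):
    """Winner-takes-all disparity from SAD matching costs on 1-D rows.
    This is only the data term; SGM additionally aggregates these costs
    along several image paths to enforce smoothness."""
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for x in range(max_disp + window, n - window):
        costs = []
        for d in range(max_disp + 1):
            patch_l = left[x - window:x + window + 1]
            patch_r = right[x - d - window:x - d + window + 1]
            costs.append(np.abs(patch_l - patch_r).sum())  # SAD matching cost
        disp[x] = int(np.argmin(costs))                    # winner takes all
    return disp

# Synthetic pair: a feature at x in the "left" row appears at x-3 in the "right" row.
left = np.sin(np.linspace(0.0, 12.0, 100))
right = np.roll(left, -3)
disp = wta_disparity(left, right, max_disp=6)
```

Depth then follows from disparity via the calibrated baseline and focal length.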
See <a href=\"https:\/\/mpr-projects.com\/index.php\/2025\/05\/08\/stereo-vision-with-two-rpi-cameras\/\" data-type=\"post\" data-id=\"2078\">this blog post<\/a> for more details.<\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>Tech:<\/strong> Python (general and camera control), OpenCV (Image Processing), Matplotlib \/ Open3D (visualization)<\/p>\n\n\n\n<div aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-group is-layout-flow wp-block-group-is-layout-flow\">\n<h4 class=\"wp-block-heading\">YOLO against Pigeons<\/h4>\n\n\n\n<figure data-wp-context=\"{&quot;imageId&quot;:&quot;69debcad99168&quot;}\" data-wp-interactive=\"core\/image\" data-wp-key=\"69debcad99168\" class=\"wp-block-image alignright size-thumbnail wp-lightbox-container\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"150\" data-wp-class--hide=\"state.isContentHidden\" data-wp-class--show=\"state.isContentVisible\" data-wp-init=\"callbacks.setButtonStyles\" data-wp-on--click=\"actions.showLightbox\" data-wp-on--load=\"callbacks.setButtonStyles\" data-wp-on-window--resize=\"callbacks.setButtonStyles\" src=\"https:\/\/mpr-projects.com\/wp-content\/uploads\/2025\/05\/DSCF0210-150x150.jpg\" alt=\"\" class=\"wp-image-2199\"\/><button\n\t\t\tclass=\"lightbox-trigger\"\n\t\t\ttype=\"button\"\n\t\t\taria-haspopup=\"dialog\"\n\t\t\taria-label=\"Enlarge\"\n\t\t\tdata-wp-init=\"callbacks.initTriggerButton\"\n\t\t\tdata-wp-on--click=\"actions.showLightbox\"\n\t\t\tdata-wp-style--right=\"state.imageButtonRight\"\n\t\t\tdata-wp-style--top=\"state.imageButtonTop\"\n\t\t>\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"12\" height=\"12\" fill=\"none\" viewBox=\"0 0 12 12\">\n\t\t\t\t<path fill=\"#fff\" d=\"M2 0a2 2 0 0 0-2 2v2h1.5V2a.5.5 0 0 1 .5-.5h2V0H2Zm2 10.5H2a.5.5 0 0 1-.5-.5V8H0v2a2 2 0 0 0 2 2h2v-1.5ZM8 12v-1.5h2a.5.5 0 0 0 .5-.5V8H12v2a2 2 0 0 1-2 2H8Zm2-12a2 2 0 0 1 2 2v2h-1.5V2a.5.5 0 0 0-.5-.5H8V0h2Z\" 
\/>\n\t\t\t<\/svg>\n\t\t<\/button><\/figure>\n\n\n\n<p>This is a small project where I use YOLO to keep pigeons away from my balcony. Pigeons can apparently hear much lower sound frequencies than humans, who typically can&#8217;t hear anything below about 20Hz. So I set up a Raspberry Pi with two cameras that capture the balcony every 10 seconds. The images from the two cameras are fed into the pre-trained YOLO v5 model from Ultralytics. If any pigeons are detected, the RPi plays a random low-frequency wave between 7Hz and 15Hz at maximum amplitude. The sound wave is inaudible to (most) humans but the pigeons take flight immediately. After 5 seconds the cameras take another set of pictures and if no pigeons are detected the sound stops playing. Since the two cameras can&#8217;t capture the whole balcony at once, I&#8217;ve put the device on a turntable with a servo motor underneath it. The search for pigeons is repeated at a set of different rotation angles such that the entire balcony is covered. The pre-trained model can&#8217;t detect pigeons specifically; it only has a category <em>birds<\/em>. 
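The detection-and-deterrent logic can be sketched as below. The torch.hub call is the standard route to the pretrained Ultralytics YOLOv5 model, but the helper names are my own and the snippet is an illustration of the loop, not the exact balcony code:

```python
import random

BIRD_LABEL = "bird"  # COCO class name used by the pretrained YOLOv5 models

def should_play_tone(detected_labels):
    """Play the deterrent tone whenever any bird is detected."""
    return BIRD_LABEL in detected_labels

def pick_tone_frequency(rng=random):
    """Random low-frequency tone, inaudible to most humans (< ~20Hz)."""
    return rng.uniform(7.0, 15.0)

def detect_labels(image_paths):
    """Run the pretrained YOLOv5 model on the captured images and return
    the set of detected class labels. Requires network access to fetch
    the ultralytics/yolov5 repo and weights on first use."""
    import torch
    model = torch.hub.load("ultralytics/yolov5", "yolov5s")
    results = model(image_paths)
    return {results.names[int(det[-1])] for pred in results.pred for det in pred}
```

The main loop would call `detect_labels` at each turntable angle and start or stop the tone according to `should_play_tone`.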
That&#8217;s ok in my case because pigeons are the only birds that visit my balcony.<\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>Tech:<\/strong> Python (servo and general control), PyTorch \/ YOLO (object detection)<\/p>\n\n\n\n<div style=\"height:25px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-group is-layout-flow wp-block-group-is-layout-flow\">\n<h3 class=\"wp-block-heading\">DIY Digital Spectroscope<\/h3>\n\n\n\n<p>In this project I combined an <a href=\"https:\/\/www.amazon.de\/-\/en\/Scientific-U19500-Spectroscope-Examination-Spectrum\/dp\/B005LY493O?crid=23MFXOT4UZLIF&amp;dib=eyJ2IjoiMSJ9.zAKDheetloxIK0Mv-R3p-3xuco7aBAOVx5Jat69TYohfp5na9kyzpM0fISFImvnDKcGzIO0etjOkfmuR6_o5qtTte_o6QvFGCVMOcopsOp2WWQnAvunBUMvgvInRhb7AnU3PpKCnfuym_upzkGhMfPzrlMa88S-uunDCibvUvYnRv1DZvnprAjy-2_ZEEGEkvlL1BNLSXkpNDfmm_GDVxHb7mC_zax4VaJRoiARyCVqpwfy06R3HyQhR4teNAAh9LrS5TE1fxAzEqNnDhQ9bRCzIsILIx13oL6kS2PfYeBI.G-73fL5WHXT6zUoNg7epoh96S_a2k6cnkoSoYuUJpMs&amp;dib_tag=se&amp;keywords=pocket+spectroscope&amp;qid=1746201766&amp;sprefix=pocket+spect%2Caps%2C439&amp;sr=8-2&amp;ufe=app_do%3Aamzn1.fos.1d0000e1-44b8-40d1-a25b-0cacf650cfb8\" target=\"_blank\" rel=\"noreferrer noopener\">analogue pocket spectroscope<\/a> and a digital camera to get a DIY digital spectroscope. This was a challenging project because I didn&#8217;t know how exactly the analog pocket spectroscope worked on the inside. 
To get reliable results I had to run lots of tests and infer how the analogue spectroscope works internally.<\/p>\n\n\n\n<figure class=\"wp-block-embed alignright is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Building an accurate DIY Spectroscope\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/cYWU4iq_pRU?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>Since I was using a camera intended for photography (not a mono camera), there was a color filter array on top of the sensor. So each pixel is only sensitive to a subset of the light that reaches the sensor (plus there&#8217;s an IR cutoff filter). To measure the entire visible spectrum I had to combine the signals from the three RGB measurements, which required a lot of calibration.<\/p>\n\n\n\n<p>Calibrating the digital spectroscope was another big challenge. While the calibration of wavelengths (i.e. the horizontal axis on a chart showing the measured spectrum) can be done relatively easily using the known spectral lines of the sun, calibrating the relative intensity (the vertical axis) is harder because it requires a light source with a known, relatively flat, spectrum. 
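The wavelength (horizontal-axis) calibration amounts to fitting a pixel-to-wavelength map through the known solar absorption lines. A minimal numpy sketch of that idea; the pixel positions below are made-up illustrative values, not my actual measurements:

```python
import numpy as np

# Known solar (Fraunhofer) line wavelengths in nm, and the pixel columns
# at which they were located in a captured spectrum. The pixel values
# here are invented for illustration.
known_lines_nm = np.array([430.8, 486.1, 527.0, 589.3, 656.3])
pixel_columns = np.array([212.0, 395.0, 530.0, 736.0, 958.0])

# Fit a linear pixel -> wavelength map (a low-order polynomial would
# also work if the dispersion isn't quite linear).
slope, intercept = np.polyfit(pixel_columns, known_lines_nm, deg=1)

def pixel_to_wavelength(px):
    """Map a pixel column of the captured spectrum to a wavelength in nm."""
    return slope * px + intercept
```

With this map in place, every pixel column of the extracted spectrum gets a physical wavelength; the intensity axis still needs its own, separate calibration.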
If you&#8217;re interested in the details, please check out the video; it contains lots of information about the calibration process.<\/p>\n\n\n\n<p>The code that generates the digital spectrum from an image taken by the camera can be found <a href=\"https:\/\/github.com\/mpr-projects\/DIY-Spectroscope\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a>.<\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>Tech:<\/strong> Python \/ OpenCV (image processing), tkinter (visualization)<\/p>\n\n\n\n<div style=\"height:25px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-group alignwide is-layout-flow wp-block-group-is-layout-flow\">\n<h3 class=\"wp-block-heading\">Simulation of a CNC-Machine<\/h3>\n\n\n\n<figure class=\"wp-block-embed alignright is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"CNC Simulator Demo\" width=\"500\" height=\"375\" src=\"https:\/\/www.youtube.com\/embed\/EctmQ5b3R3o?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>In this project I wrote a simulation of a CNC-Machine. The inputs to the program are <\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>a file containing the G-Code that you want to run,<\/li>\n\n\n\n<li>the drill bit that you want to use and<\/li>\n\n\n\n<li>a 3d model of the workpiece that you&#8217;ll cut into.<\/li>\n<\/ul>\n\n\n\n<p>The simulator then runs the code and outputs a 3d model of the finished workpiece. The exact shape of the workpiece depends on the drill bit that you&#8217;ve chosen, just like on a real CNC-Machine.<\/p>\n\n\n\n<p>It also simulates the movement of the parts of a simplified CNC-Machine. 
During the simulation it checks for collisions between (non-cutting) parts of the machine and the workpiece and prints a warning if one occurs. To find the correct cuts I use a sweeping algorithm: I take the drill profile at the beginning and at the end of a movement and connect the two profiles to get a mesh of the whole cut. This mesh is then subtracted from the workpiece. This boolean operation becomes expensive for complex geometries, so I split the workpiece into smaller segments and compute the boolean (in parallel) on each segment that overlaps with a machine part. The simulation works in full 3d space with arbitrary drill bit geometries.<\/p>\n\n\n\n<p>There are far more efficient algorithms for simple 3-axis machines like the one I&#8217;ve implemented here. This simulator is not intended to replace them. Rather, I created it as a proof-of-concept for more complicated machines (e.g. robot arms with many axes). Since its core functionality works in full 3d space it could readily be extended to cover machines with arbitrary moving parts.<\/p>\n\n\n\n<p>The code uses the C++ library <a href=\"https:\/\/github.com\/elalish\/manifold\" data-type=\"link\" data-id=\"https:\/\/github.com\/elalish\/manifold\" target=\"_blank\" rel=\"noreferrer noopener\">manifold<\/a> for fast boolean operations with guaranteed manifold output. I&#8217;ve written the core of the program in C++; parallelization uses OpenMP. The command line version of the simulator runs the C++ code directly. 
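The segment-splitting idea can be sketched as follows. The real implementation is C++ with manifold and OpenMP; this Python sketch with hypothetical helper names only shows the bounding-box filtering that decides which segments need the expensive boolean:

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box with (x, y, z) min and max corners."""
    lo: tuple
    hi: tuple

    def overlaps(self, other):
        return all(self.lo[i] <= other.hi[i] and other.lo[i] <= self.hi[i]
                   for i in range(3))

def segments_to_process(segment_boxes, cut_box):
    """Return indices of workpiece segments whose bounding box overlaps
    the swept cut mesh; only these need the boolean subtraction. In the
    real simulator these subtractions run in parallel."""
    return [i for i, box in enumerate(segment_boxes) if box.overlaps(cut_box)]

# Workpiece split into four segments along x; the cut only touches two of them.
segments = [AABB((i, 0, 0), (i + 1, 1, 1)) for i in range(4)]
cut = AABB((0.5, 0.2, 0.8), (1.5, 0.8, 1.2))
```

Segments rejected by this cheap test are left untouched, which is what makes the per-segment booleans pay off on complex geometries.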
The GUI version is written in Python with VTK\/Qt6 and it accesses the C++ code via Python bindings.<\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>Tech:<\/strong> C++ (computation), OpenMP (parallelization), Python \/ VTK \/ Qt6 (visualization)<\/p>\n\n\n\n<div style=\"height:25px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n<\/div>\n<\/div>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Reinforcement Learning Agent This project was about testing different reinforcement-learning algorithms. After going through the lecture notes of David Silver&#8217;s (DeepMind) UCL course on reinforcement learning, I wanted to apply the methods to a game that was i) simple enough to train on my single GPU and ii) fun and a bit out of the [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_eb_attr":"","footnotes":""},"class_list":["post-1944","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/mpr-projects.com\/index.php\/wp-json\/wp\/v2\/pages\/1944","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mpr-projects.com\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/mpr-projects.com\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/mpr-projects.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mpr-projects.com\/index.php\/wp-json\/wp\/v2\/comments?post=1944"}],"version-history":[{"count":95,"href":"https:\/\/mpr-projects.com\/index.php\/wp-json\/wp\/v2\/pages\/1944\/revisions"}],"predecessor-version":[{"id":2268,"href":"https:\/\/mpr-projects.com\/index.php\/wp-json\/wp\/v2\/pages\/1944\/revisions\/2268"}],"wp:attachment":[{"href":"https:\/\/mpr-projects.com\/index.php\/wp-json\/wp\/v2\/media?parent=1944"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}