Rebecca

Forum Replies Created

Viewing 15 posts - 1 through 15 (of 37 total)
  • in reply to: open webcam inputs screen blank #689

    Rebecca
    Keymaster

    Hi Prachi, I believe there’s a problem with Processing’s use of video with some of the newer versions of Windows. You might try running from the Processing source code rather than the executable. If that doesn’t work and you’re set on using a webcam, I’d probably recommend using OpenFrameworks instead, since Processing’s support is so spotty at the moment.

  • in reply to: Unable to open webcam inputs #672

    Rebecca
    Keymaster

    Hi Federico,
    I’m not surprised that Big Sur is giving problems, then! Unfortunately I don’t have access to a Big Sur machine so I can’t try things out for myself.
    All I can say at this point is this:
    * Perhaps try out the code at https://www.doc.gold.ac.uk/~mas01rf/WekinatorDownloads/wekinator_examples/all_source_zips/SimpleVideoInputWithProcessing_100Inputs_Catalina.zip (running within either the latest stable version of Processing 3, or if that doesn’t work, running in the alpha version of Processing 4)
    * If that doesn’t work, then looking at the Processing forums for information about using webcams on Big Sur is probably the next step. In particular, I’d recommend using Processing 4.
    * Ultimately, because of the problems introduced by Catalina, you’re probably better off using openFrameworks (or another environment, like Max/MSP + Jitter) if you want to build something that uses realtime webcam input.

    I hope this helps, and please do post an update!

    Rebecca

  • in reply to: Unable to open webcam inputs #667

    Rebecca
    Keymaster

    Hi Federico, are you on Catalina by any chance? There have been some (unwelcome) changes to the way that Processing applications interface with the webcam on Catalina. If you’re able to run Processing source code directly, please try out the new version below. I’ll try to make an executable of this to put on the website later today.

    ———-

    /**
     * Temporary code. Adapted from the Mirror example by Dan Shiffman.
     * Downsamples the webcam image into a grid of boxes, draws each box in
     * its average colour, and sends those values to Wekinator over OSC.
     */

    import processing.video.*;
    import oscP5.*;
    import netP5.*;

    // Size of each cell in the grid
    int boxWidth = 64;
    int boxHeight = 48;

    int numHoriz = 640/boxWidth;
    int numVert = 480/boxHeight;
    color[] downPix = new color[numHoriz * numVert];

    // Variable for capture device
    Capture video;

    OscP5 oscP5;
    NetAddress dest;

    void setup() {
      size(640, 480);
      frameRate(30);
      colorMode(RGB, 255, 255, 255, 100);

      // This uses the default video input; see the GettingStartedCapture
      // example if this causes an error
      String[] cameras = Capture.list();
      if (cameras.length == 0) {
        println("No cameras available");
        exit();
      }
      video = new Capture(this, width, height, cameras[0]);

      // Start capturing the images from the camera
      video.start();

      background(0);

      oscP5 = new OscP5(this, 9000);
      dest = new NetAddress("127.0.0.1", 6448);
    }

    void draw() {
      if (video.available()) {
        video.read();
        video.loadPixels();

        int counter = 0;
        for (int i = 0; i < numHoriz; i++) {
          for (int j = 0; j < numVert; j++) {

            // Top-left corner of this box, pixel-wise
            int x = i*boxWidth;
            int y = j*boxHeight;
            int loc = x + y*video.width;

            // Average the colour over the whole box
            int tot = boxWidth * boxHeight;
            float rtot = 0;
            float gtot = 0;
            float btot = 0;
            for (int k = 0; k < boxHeight; k++) {
              for (int l = 0; l < boxWidth; l++) {
                int loc2 = loc + k*video.width + l;
                rtot += red(video.pixels[loc2]);
                gtot += green(video.pixels[loc2]);
                btot += blue(video.pixels[loc2]);
              }
            }
            color c2 = color((int)(rtot/tot), (int)(gtot/tot), (int)(btot/tot));

            // Draw the box in its average colour
            rectMode(CENTER);
            fill(c2);
            noStroke();
            rect(x + boxWidth/2, y + boxHeight/2, boxWidth, boxHeight);
            downPix[counter++] = c2;
          }
        }
      }

      // Send to Wekinator every other frame
      if (frameCount % 2 == 0) {
        sendOsc(downPix);
      }
    }

    void sendOsc(int[] px) {
      // Send each downsampled pixel's packed colour value as one float input
      OscMessage msg = new OscMessage("/wek/inputs");
      for (int i = 0; i < px.length; i++) {
        msg.add(float(px[i]));
      }
      oscP5.send(msg, dest);
    }
  • in reply to: Starter's question & problem on opening #647

    Rebecca
    Keymaster

    Hi Hemm,

    Nice to have you here!

    To answer your questions: First, to open a saved file, you have to open Wekinator first, then use the menu system to open a saved file. I know this is super annoying, but it’s an artefact of writing in one programming language (Java) to support multiple operating systems, with one developer! I’d love to fix it but it hasn’t happened yet. But rest assured you can open your files this way.

    Second, I recommend the following to start learning more:
    * Do the online walkthrough (you’ve probably done this already).
    * Start looking at examples from http://www.wekinator.org/examples; hopefully you’ll find some in a programming language/environment you’re already a bit familiar with, and these will show you how to get started. You can always refer to language-specific tutorials (I recommend Processing especially) to learn more about coding.
    * My course on Kadenze goes into lots of detail about machine learning basics, more than enough to get you very familiar with how ML works in Wekinator and how to make effective projects with ML. I’ve also got a newer course on FutureLearn, which uses an online tool (MIMIC) instead of Wekinator to teach the basics. That course is much shorter, but may also be helpful if you’re a total beginner.

    Hope this helps!

    Rebecca

  • in reply to: Wekinator Machine #643

    Rebecca
    Keymaster

    That is super cool, Ryan! Thanks for sharing! Are you happy for me to put this on the Wekinator examples list (with credit to you) next time I do a site update?

    Rebecca

  • in reply to: GPU vs CPU fo DTW #630

    Rebecca
    Keymaster

    Hi Arthur,

    Thanks for the post. Currently I doubt you would see any benefit from running Wekinator DTW on a GPU. Implementation of the FastDTW matching could indeed be parallelised, but it’s not currently done in Wekinator. One could certainly refactor the code to try to get some performance gains, but if speed is a big concern then I might recommend using an existing DTW/FastDTW library and building something from scratch that’s well tailored to the type of sensor(s)/gesture(s)/etc. that you’re using. Wekinator’s DTW implementation makes certain assumptions about gesture lengths in the training set and during runtime, and you can tweak these in the interface right now to try to improve recognition time and/or accuracy for particular types of gestures. But you could almost certainly do even better if you build in assumptions that are more appropriate to a particular problem.
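
    For intuition about what a from-scratch version involves, here is the classic textbook DTW distance in a few lines of Java. This is a sketch of the general algorithm only, not Wekinator’s FastDTW code, and the class name and example sequences are made up:

```java
import java.util.Arrays;

public class DtwSketch {
    // Classic O(n*m) dynamic time warping distance between two 1-D sequences,
    // using absolute difference as the local cost.
    static double dtw(double[] a, double[] b) {
        int n = a.length, m = b.length;
        double[][] cost = new double[n + 1][m + 1];
        for (double[] row : cost) Arrays.fill(row, Double.POSITIVE_INFINITY);
        cost[0][0] = 0.0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                double d = Math.abs(a[i - 1] - b[j - 1]);
                // Best of the three allowed warp moves: insert, delete, match
                cost[i][j] = d + Math.min(cost[i - 1][j],
                             Math.min(cost[i][j - 1], cost[i - 1][j - 1]));
            }
        }
        return cost[n][m];
    }

    public static void main(String[] args) {
        double[] template = {0, 1, 2, 3, 2, 1, 0};
        double[] gesture  = {0, 0, 1, 2, 3, 2, 1, 0}; // same shape, different length
        System.out.println(dtw(template, gesture)); // prints 0.0: perfect warped match
    }
}
```

    A tailored version would add exactly the kinds of assumptions mentioned above, e.g. a band constraint limiting how far the warp path can stray, sized for your expected gesture lengths.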

    By the way, if you’re using Unity, you may be interested in the new InteractML project: http://interactml.com/ We’re building a set of IML tools directly into Unity, using a visual programming paradigm. I’m really excited about it!

    Rebecca

  • in reply to: WekiMini.jar not displaying on small raspi screen #627

    Rebecca
    Keymaster

    Hi Viktor,

    Yes, unfortunately the Wekinator main GUI is about 350 x 850. It would take substantial redesign to make it fit on such a small screen.

    One thing I’ve wanted to do for a while is make a “headless” option so that Wekinator can run without a GUI, controlled entirely by OSC messages (and perhaps the command line). If I get some more development cycles, that is near the top of my list, though I’d be happy for someone else to take this on.

    Rebecca

  • in reply to: Multiple models in one project #621

    Rebecca
    Keymaster

    Hi Kalien,

    It’s not possible to combine DTW with classification and/or regression in a single project, due to how different the processes of recording examples and running models are. However, if you’d like to run DTW at the same time as classification and/or regression, you can run two Wekinator projects simultaneously (you’ll just need to set one up to listen for incoming features on a different port, and ensure you’re sending features to that port).

    Best
    Rebecca

  • in reply to: Problem connecting Processing with Wekinator #619

    Rebecca
    Keymaster

    Hi there,

    Thanks for posting. First, you won’t want to use Mouse_ForDTW_2Inputs with a classifier or continuous model; it’ll only work with dynamic time warping.

    So the issue seems mainly to be what is going wrong with Simple_Mouse-DraggedObject_2Inputs. The code here is only very slightly different from Simple_MouseXY_2Inputs, so I’m pretty certain it’s a problem with how it’s being set up and run rather than this particular piece of code not working on your computer. The easiest way to troubleshoot from scratch will be to restart your computer (thus killing any mystery OSC processes potentially in the background), then only start up Simple_Mouse_DraggedObject_2Inputs and Wekinator. If you’re on Windows, make sure you enable network traffic for Simple_Mouse_DraggedObject_2Inputs if you get a pop-up box asking you; otherwise it won’t have the permission to send OSC. Make sure Wekinator is set to use 2 inputs. Then click on the running Simple_Mouse_DraggedObject_2Inputs program, keeping an eye on Wekinator, and move the box on the screen around. You should see Wekinator’s indicator light for OSC Inputs turn green. Alternatively, if you’ve got a small screen, hit Record in Wekinator, then move the green box around in Simple_Mouse_DraggedObject_2Inputs, then return to Wekinator, and you should see some examples have been recorded.

    Rebecca

  • in reply to: DTW project with multiple inputs/outputs #609

    Rebecca
    Keymaster

    Hi Joao,

    DTW allows for multiple inputs — e.g., you could recognise a gesture made using all 10 fingers. However, it doesn’t allow for multiple parallel outputs — i.e., recognising a set of 5 gestures made with 1 finger, a set of 3 other gestures made with another finger, etc. You are correct that you’d need multiple instances of Wekinator running to implement this sort of thing, and if you’re recognising the same gestures then you could just load the same model into each one. However, due to the inefficiency of DTW you might find this *very* slow (and this wouldn’t change even if you could use the same model in a single project for each finger), so I’d recommend you think about another approach if you find this doesn’t work as accurately or quickly as you need.

    Rebecca

  • in reply to: Wekinator general questions #608

    Rebecca
    Keymaster

    Hi Joao,

    Thanks for posting. Briefly, there’s just one main developer of Wekinator (me), and I occasionally get funds to support other development, but I don’t currently have any funding to add more features. So it’s just a matter of being able to hack on it a few times a year at the moment, and during these times I prioritise changes that most seriously impact a lot of students (Kadenze or in-person students) or people using the software quite seriously in professional work. I’m aware of the issues you’ve mentioned, but there are frankly higher-priority things in the queue, so I can’t promise I will be able to implement them. Please feel free to add any of these as issues on the project GitHub page so they are noted (and other users can voice their support for them as well). The reality is that these are mainly the by-products of having one main developer and trying to efficiently support cross-platform software. I wish these quirks weren’t there, but they are. If anyone would like to fix them, please feel free to fork on GitHub and we can have a chat about whether/how to merge back in!

    Best
    Rebecca

  • in reply to: Complex Gestures with Wekinator and Kinect #533

    Rebecca
    Keymaster

    Yes, this is certainly possible. I recommend especially paying attention to the Kadenze lectures on features and capturing change over time.

  • in reply to: Recieve OSC input over IP #531

    Rebecca
    Keymaster

    Wekinator will be perfectly happy to receive OSC from outside networks (it can’t even tell where a message is coming from). You will need to ensure that:
    * Muse Monitor is sending to the right IP address, and your network isn’t blocking OSC messages.
    * Muse Monitor is sending to the right port (6448 by default).
    * Muse Monitor is sending the right message name (if not /wek/inputs, Wekinator’s default, then you’ll need to change the OSC message name in Wekinator).
    * Muse Monitor is sending a single OSC message with all its features in the same message (the same number of features every time), as floats.

    (See http://www.wekinator.org/detailed-instructions/#1_Setting_up_communication_between_inputs_outputs_and_Wekinator for more info.) If Muse Monitor isn’t capable of doing this, you’ll need to use a separate program like OSCulator, or something you’ve written yourself, to translate Muse Monitor messages into this format.
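
    The wire format this implies is simple enough to build by hand: a padded address string, a type-tag string with one ‘f’ per feature, then the features as big-endian floats in one message. Below is a rough Java sketch of a translator’s sending side (class name and feature values are my own invention; in practice you’d use an OSC library such as oscP5):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class WekOscSketch {
    // OSC strings are null-terminated and padded to a multiple of 4 bytes.
    static byte[] oscString(String s) {
        int len = s.length() + 1;          // +1 for the null terminator
        int padded = (len + 3) / 4 * 4;
        byte[] out = new byte[padded];
        System.arraycopy(s.getBytes(), 0, out, 0, s.length());
        return out;
    }

    // Build one /wek/inputs message carrying every feature as a float.
    static byte[] buildMessage(float[] features) {
        StringBuilder tags = new StringBuilder(",");
        for (int i = 0; i < features.length; i++) tags.append('f');
        byte[] addr = oscString("/wek/inputs");
        byte[] types = oscString(tags.toString());
        ByteBuffer buf = ByteBuffer.allocate(addr.length + types.length + 4 * features.length);
        buf.put(addr).put(types);
        for (float f : features) buf.putFloat(f);  // ByteBuffer is big-endian by default
        return buf.array();
    }

    public static void main(String[] args) throws Exception {
        // Two made-up feature values, sent to Wekinator's default port
        byte[] msg = buildMessage(new float[]{0.1f, 0.5f});
        try (DatagramSocket sock = new DatagramSocket()) {
            sock.send(new DatagramPacket(msg, msg.length,
                    InetAddress.getByName("127.0.0.1"), 6448));
        }
    }
}
```

    The key point is the last bullet above: every packet is a single message with a fixed feature count, so the type-tag string (and the packet length) is the same every time.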

  • in reply to: FaceOSC & Wekinator on Mac #519

    Rebecca
    Keymaster

    Hi Pratyush,

    Thanks for posting. There is an issue with openFrameworks-generated apps on some versions of OS X, specifically having to do with OS X Gatekeeper. This may be causing the problem you’re seeing.

    One easy fix is to move the app to another directory (e.g., to the desktop) and then back. Then try double-clicking on it again. If it’s the Gatekeeper issue causing your problem, this should fix it.
