Isolated operant component in the flight simulator

on Monday, January 21st, 2019 2:43

Isolating the operant component removes the preference for left and right turning.

Each period lasts 120 seconds, so the flies receive a total of 8 minutes of training. The flies perform two initial pre-test periods, one test period after 4 minutes of training, and two final test periods. For most of the experiment, the flies are given a color cue indicating whether left or right turning is being punished; this serves as a composite learning control. For the two final test periods the colors are removed, isolating the operant component. This differs from previous experiments, in which flies were not given any colors and had to rely on their own behavior alone to determine which turning direction was being punished. Removing the helping colors abolished the preference for left- or right-turning maneuvers.
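The period structure described above can be sketched as a simple schedule. This is a hypothetical reconstruction from the text: the labels and the exact position of the mid-training test are my assumptions.

```python
# Hypothetical sketch of the experiment schedule described above.
# Assumed from the text: 2 pretests, 4 training periods totalling
# 8 minutes, one test after 4 minutes of training, and 2 final tests
# with the colors removed.
PERIOD_SECONDS = 120

schedule = [
    ("pretest", "colors on"),
    ("pretest", "colors on"),
    ("training", "colors on"),
    ("training", "colors on"),
    ("test", "colors on"),      # test after 4 min of training
    ("training", "colors on"),
    ("training", "colors on"),
    ("test", "colors off"),     # colors removed: operant component isolated
    ("test", "colors off"),
]

# Total training time should come out to 8 minutes.
training_seconds = sum(
    PERIOD_SECONDS for phase, _ in schedule if phase == "training"
)
```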


Come work with us on FoxP!

on Wednesday, December 19th, 2018 2:52

We are looking for a PhD student to perform behavioral experiments with Drosophila fruit flies in which FoxP function has been manipulated.

The human orthologues of the fly FoxP gene are the FOXP1-4 genes. Mutations in the FOXP2 gene cause verbal dyspraxia, a form of articulation impairment. Humans learn to articulate phonemes and words by a form of motor learning we can model in flies. Supporting the conceptual analogy of motor learning in humans and flies, manipulations of the fly FoxP gene also lead to impairments in motor learning.

FoxP isoform B expression pattern in the adult brain (green). Counterstaining: Bruchpilot (red)

In the past year, graduate student Ottavia Palazzo used CRISPR/Cas9 to edit the FoxP gene locus, tagging the gene with reporters. These reporters allow us to manipulate not only the gene, but also the neurons which express FoxP. The candidate will work closely with Ottavia to design behavioral experiments characterizing the various manipulations of the different neuronal populations for their involvement in the form of motor learning we use, operant self-learning at the torque meter.

The position is fully funded by a grant from the German funding agency DFG, with full healthcare, unemployment, etc. benefits. It includes admission and tuition to the “Regensburg International Graduate School of Life Sciences”. Starting date is as soon as convenient.

The successful candidate will have a Master’s degree or equivalent. They will be proficient in English as our group is composed of international members. The ideal candidate will have some training in behavioral experiments in Drosophila or other animals, some coding experience and an inclination towards electronics. However, all of these skills can also be learned during the project.

We are a small, international group consisting of a PI (Björn Brembs), a postdoc (Anders Eriksson), one more graduate student besides Ottavia (Christian Rohrsen) and a technician. We are an open science laboratory, so one aspect of the project will involve a new open science initiative in our laboratory: we have developed a simple method to make our behavioral data openly accessible automatically, i.e., without any additional effort by the experimenter. Besides doing science right, this entails at least two advantages for the candidate: the data are automatically backed up, and there is no need for a data management plan.

Regensburg is a university town in Bavaria, Germany with about 120k inhabitants and a vibrant student life, due to the 20k students enrolled here. The University of Regensburg is an equal opportunity employer.

Interested candidates should contact Björn Brembs with a CV and a brief letter of motivation.


17d flight simulator

on Monday, October 1st, 2018 2:58


Joystick Update

on Monday, July 30th, 2018 2:08

Role of dopaminergic neurons in operant behaviour

on Friday, July 27th, 2018 3:54

Positive Control: Gr28bd-G4, TrpA1-G4

Parameters: light intensity = 500 lux (side), 1000 lux (bottom); frequency = 20 Hz; delay = 1 ms; duration = 9.9 ms; voltage = 6.4 V

Red lines: completed

mb025b: not selected against tubby


reinforcement scores

on Monday, July 23rd, 2018 2:21

Below is a plot of the reinforcement effect sizes for 30 genotypes. On the y-axis are the PI values for the learning effect sizes. These scores are calculated by taking the average of the PI values of the training periods and then subtracting the pretest PI value.

Reinforcement score = mean of training PI values − pretest PI value
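As a minimal sketch, the score for one fly could be computed like this (function and variable names are mine, assuming one pretest PI and a list of training-period PIs per fly):

```python
def reinforcement_score(training_pis, pretest_pi):
    """Mean of the training-period PI values minus the pretest PI.

    `training_pis` and `pretest_pi` are hypothetical names, not from
    the original analysis code.
    """
    return sum(training_pis) / len(training_pis) - pretest_pi

# Hypothetical example: four training periods and one pretest period.
score = reinforcement_score([0.4, 0.5, 0.6, 0.5], 0.1)  # 0.5 - 0.1 = 0.4
```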


The T-maze Experiments: Screen results as of 22-7-18

on Sunday, July 22nd, 2018 6:41


Yellow 1 (Positive Control): Gr28bd-G4, TrpA1-G4

Parameters: light intensity = 500 lux (side), 1000 lux (bottom); frequency = 20 Hz; delay = 1 ms; duration = 9.9 ms; voltage = 6.4 V


Finding the interesting lines

on Friday, July 20th, 2018 3:46

This is the correlation between the T-maze experiments from Gaia and Naman. Neither the ranked nor the regular correlation shows any significant effect. This means that most of these effects seem to be random; could this be an overfitting result?

I would say blue 1 is a line that has been negative in all the tests I have seen so far, so it might be an interesting line. What to do next?

I would unblind blue 1, which is TH-D’. It was shown to be required for classical conditioning in shock and temperature learning (Galili et al. 2014). Another interesting observation is that th-g4+th-g80 seems to have PI scores near zero in all of the experiments (Naman's and Gaia's T-maze, the joystick and the Y-mazes). So could it be that all of these neurons do play a role, but one that depends on the context each time? Maybe Vanessa Ruta's work might be interesting for that.


wiggle difference

on Monday, July 16th, 2018 3:26

Below is a plot of all the flies of the 18 genotypes for the wiggle metric. It is calculated by summing the difference between consecutive trace points at each step; thus, wiggle = sum(difference in trace point at each step). This is done over the entire 20 minutes.
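A minimal sketch of this metric follows. Taking the absolute value of each step difference is my assumption (so that movements in opposite directions do not cancel out), and `trace` is a hypothetical name for one fly's sequence of trace points.

```python
def wiggle(trace):
    """Sum of absolute differences between consecutive trace points.

    `trace` is a hypothetical sequence of samples for one fly over
    the 20-minute experiment; absolute differences are an assumption.
    """
    return sum(abs(b - a) for a, b in zip(trace, trace[1:]))

# Hypothetical trace of four samples: steps of 1, 2 and 3.
example = wiggle([0.0, 1.0, -1.0, 2.0])
```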

NOTE: The flies have not yet been separated into 2 categories based on pretest values.

Now we wanted to measure the difference between on wiggle and off wiggle. On wiggle is the wiggle accumulated while the fly is in the part of the arena where the light is supposed to be on; off wiggle, likewise, is the wiggle while the light is supposed to be off (that is, in the portion we want to train the fly to stay in). Below is the difference between the two, i.e. on wiggle − off wiggle:
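A sketch of this split, assuming a per-step flag for the light condition (the names and the flag convention are mine, not from the original analysis):

```python
def wiggle_difference(trace, light_on):
    """On-wiggle minus off-wiggle.

    `light_on[i]` is a hypothetical boolean flagging the light state
    during the step from trace[i] to trace[i + 1].
    """
    on = off = 0.0
    for a, b, lit in zip(trace, trace[1:], light_on):
        step = abs(b - a)
        if lit:
            on += step
        else:
            off += step
    return on - off

# Hypothetical example: steps of 1 (light on), 2 (light off), 0 (light on).
diff = wiggle_difference([0.0, 1.0, 3.0, 3.0], [True, False, True])
```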

Mean of this wiggle difference:


reinforcement (without subtracting pretest)


Below is a plot of the reinforcement effect sizes for 18 genotypes. On the y-axis are the PI values for the learning effect sizes, this time without subtracting the pretest (i.e., without normalizing). These scores are calculated by taking the average of the PI values of the training periods. We are comparing this un-normalized reinforcement with the previous post, which showed the graphs after subtraction of the pretest PIs.

Reinforcement (without normalizing) = mean of training PI values.
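For comparison with the normalized score from the earlier post, the un-normalized variant is simply the mean (a sketch; the function name is my own):

```python
def reinforcement_unnormalized(training_pis):
    """Mean of the training-period PI values; pretest PI not subtracted.

    `training_pis` is a hypothetical list of one fly's training PIs.
    """
    return sum(training_pis) / len(training_pis)

# Hypothetical example with two training periods.
score = reinforcement_unnormalized([0.2, 0.4])
```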
