## Joystick Update

on Monday, July 30th, 2018 2:08 | by Amanda Torres

Category: Lab, Operant learning, Operant reinforcement, operant self-learning, Optogenetics | No Comments

## Role of dopaminergic neurons in operant behaviour

on Friday, July 27th, 2018 3:54 | by Gaia Bianchini

Positive Control: Gr28bd-G4, TrpA1-G4

Parameters: Light: intensity (500 Lux side, 1000 Lux bottom); frequency = 20Hz; Delay = 1 ms; Duration = 9.9 ms; volts = 6.4

Red lines: completed

mb025b: not selected against tubby

## Mean trace of all flies and how degrees of freedom vary over learning

on Monday, July 23rd, 2018 6:40 | by Christian Rohrsen

Mean trace of the positive control in the Joystick, to see the overall dynamics and perhaps get an idea of which score might be best to pick. Here, the standard deviation across flies along the time axis, just to see whether the flies show phenotypes more or less similar to each other at each time point.

This is to see whether the flies have fewer degrees of freedom in any segment, by measuring the standard deviation within each segment. There does not seem to be any effect, although this might be confounded with the wiggle scores. I think entropy would be a better measure.
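As a sketch of what such a per-segment analysis could look like, here is hypothetical Python (function names, segment count and synthetic traces are all made up for illustration), computing both the per-segment standard deviation and a simple Shannon-entropy alternative:

```python
import numpy as np

def per_segment_std(traces, n_segments):
    """Split each fly's trace into segments and return the
    standard deviation within each segment (one value per fly per segment).

    traces: array of shape (n_flies, n_timepoints)
    """
    segments = np.array_split(traces, n_segments, axis=1)
    return np.array([seg.std(axis=1) for seg in segments]).T

def per_segment_entropy(traces, n_segments, bins=20):
    """Shannon entropy (bits) of the position distribution in each
    segment, as an alternative variability measure."""
    segments = np.array_split(traces, n_segments, axis=1)
    out = []
    for seg in segments:
        ent = []
        for fly in seg:
            counts, _ = np.histogram(fly, bins=bins)
            p = counts / counts.sum()
            p = p[p > 0]  # 0 * log(0) is taken as 0
            ent.append(-(p * np.log2(p)).sum())
        out.append(ent)
    return np.array(out).T

# synthetic example: 10 flies, 1200 samples each, 4 segments
rng = np.random.default_rng(0)
traces = rng.normal(size=(10, 1200))
print(per_segment_std(traces, 4).shape)      # (10, 4)
print(per_segment_entropy(traces, 4).shape)  # (10, 4)
```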

All the same plots as above but for TH-D’, the interesting line from the screen.

## Performance index for modelling for data in the Y-mazes

on | by Christian Rohrsen

These are the performance indices for the different models used to estimate the valence of the dopaminergic clusters. AIC: Akaike information criterion; BIC: Bayesian information criterion; LogLikelihood: log-likelihood estimate

lm: linear model

+ int: double interactions taken into account

b lm: Bayesian linear model with the bayesglm function

b lm MCMC: Bayesian linear model with the MCMCglm function

nlm: nonlinear model, splines fitted with the lm function

b nlm: splines fitted to each cluster, with the MCMCglm function

GAM: generalized additive model with the gam function

Adding double interactions seems to produce better models; nonlinearities also improve the models, and so do the frequentist methods. To me it seems this data might be noise, so that adding interactions, nonlinearities and frequentist methods just fits the noise better (overfitting), which would be why I get better scores with them. In addition, care needs to be taken, since I use different functions that calculate the model performance scores differently (although the formulas are theoretically the same for all!)
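Part of that last caveat can be sidestepped by computing the criteria by hand from the log-likelihood, so every model is scored with exactly the same formulas. A minimal Python sketch, with synthetic data and a simple polynomial-degree comparison standing in for the actual cluster models (everything here is hypothetical illustration, not the analysis code):

```python
import numpy as np

def gaussian_loglik(y, y_hat):
    """Log-likelihood of a Gaussian model with MLE residual variance."""
    n = len(y)
    rss = ((y - y_hat) ** 2).sum()
    sigma2 = rss / n
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

def aic(loglik, k):
    """Akaike information criterion; k = number of fitted parameters."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion; penalizes parameters more for large n."""
    return k * np.log(n) - 2 * loglik

# synthetic example: compare a linear and a quadratic fit on the same data
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200)
y = 0.5 * x + rng.normal(scale=0.1, size=x.size)

for degree in (1, 2):
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    k = degree + 2  # polynomial coefficients plus the noise variance
    ll = gaussian_loglik(y, y_hat)
    print(degree, round(aic(ll, k), 1), round(bic(ll, k, len(y)), 1))
```

With identical formulas across models, differences in AIC/BIC then reflect only the fits themselves, not the scoring functions.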

## reinforcement scores

on | by Saurabh Bedi

Below is the plot of reinforcement effect sizes for 30 genotypes. On the y-axis are the PI values for the learning effect sizes. These scores are calculated by taking the average of the PI values of the training periods and then subtracting the pretest PI value.

Reinforcement scores = mean of training score – pretest PI score
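As a minimal sketch of that formula (function name and values are hypothetical, not the actual analysis code):

```python
import numpy as np

def reinforcement_score(pretest_pi, training_pis):
    """Reinforcement score = mean of the training PIs minus the pretest PI."""
    return np.mean(training_pis) - pretest_pi

# hypothetical fly: one pretest period, two training periods
print(round(reinforcement_score(0.1, [0.4, 0.6]), 3))  # 0.4
```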

Category: Operant learning, Optogenetics, Uncategorized | No Comments

## The T-maze Experiments: Screen results as of 22-7-18

on Sunday, July 22nd, 2018 6:41 | by Naman Agrawal

Yellow 1 (Positive Control): Gr28bd-G4, TrpA1-G4

Parameters: Light: intensity (500 Lux side, 1000 Lux bottom); frequency = 20Hz; Delay = 1 ms; Duration = 9.9 ms; volts = 6.4

Category: neuronal activation, open science, Operant learning, Optogenetics | No Comments

## Finding the interesting lines

on Friday, July 20th, 2018 3:46 | by Christian Rohrsen

This is the correlation between the T-maze experiments from Gaia and Naman. Neither the ranked nor the regular correlation shows any significant effect. This means that these effects seem to be random, at least for most of them; is this an overfitting result?
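For reference, the ranked correlation is just the regular (Pearson) correlation computed on the ranks. A minimal Python sketch with made-up PI values (no tie handling, unlike e.g. scipy.stats.spearmanr):

```python
import numpy as np

def pearson(x, y):
    """Regular (Pearson) correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    """Ranked (Spearman) correlation: Pearson on the ranks.
    Note: double argsort ranks without handling ties."""
    rank = lambda v: np.argsort(np.argsort(v))
    return pearson(rank(x), rank(y))

# hypothetical PI scores of the same lines in two replicates
a = [0.2, -0.1, 0.5, 0.0, 0.3]
b = [-0.3, 0.4, 0.1, 0.2, -0.2]
print(round(pearson(a, b), 2), round(spearman(a, b), 2))
```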

I would say blue 1 is a line that was negative for all the tests I have so far seen. So this might be an interesting line. What to do next?

I would unblind blue 1, which is TH-D’. It was shown to be required for classical conditioning in shock and temperature learning (Galili et al. 2014). Another interesting observation is that th-g4+th-g80 seems to have PI scores of about zero in all of the experiments (Naman and Gaia in the T-maze, the Joystick, and the Y-mazes). So could it be that all of these neurons do carry a meaning, but one that depends on the context each time? Maybe Vanessa Ruta's work might be interesting for that.

## wiggle difference

on Monday, July 16th, 2018 3:26 | by Saurabh Bedi

Below is a plot of all the flies of 18 genotypes for the wiggle. This is calculated by taking the sum of the differences in the trace point at each step, i.e. wiggle = sum(difference in trace point at each step), computed over the entire 20 minutes.

NOTE: The flies have not yet been separated into 2 categories based on pretest values.

Now we wanted to measure the difference between on-wiggle and off-wiggle. On-wiggle is the wiggle while the fly was in the part where the light is supposed to be on; similarly, off-wiggle is the wiggle while the light is supposed to be off (that is, in the portion in which we want to train it to stay). So below is the difference of the two, i.e. on-wiggle – off-wiggle:
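A minimal sketch of how both quantities could be computed, assuming "difference at each step" means the absolute step-to-step change and that a boolean mask marks the light-on samples (all names and values here are hypothetical):

```python
import numpy as np

def wiggle(trace):
    """Total wiggle: sum of absolute step-to-step changes in the trace."""
    trace = np.asarray(trace, float)
    return np.abs(np.diff(trace)).sum()

def wiggle_difference(trace, light_on):
    """on-wiggle minus off-wiggle, given a boolean mask marking the
    samples recorded while the light was on."""
    trace = np.asarray(trace, float)
    light_on = np.asarray(light_on, bool)
    steps = np.abs(np.diff(trace))
    on = light_on[1:]  # attribute each step to the state at its endpoint
    return steps[on].sum() - steps[~on].sum()

trace = [0.0, 1.0, 0.5, 1.5, 1.0]
light_on = [False, False, True, True, True]
print(wiggle(trace))                       # 3.0
print(wiggle_difference(trace, light_on))  # 1.0
```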

Mean of this wiggle difference:

Category: Operant learning, Optogenetics, Uncategorized | No Comments

## reinforcement(without subtracting pretest)

on | by Saurabh Bedi

Below is the plot of reinforcement effect sizes for 18 genotypes. On the y-axis are the PI values for the learning effect sizes, this time without subtracting the pretest (without normalizing). These scores are calculated by taking the average of the PI values of the training periods. We are comparing reinforcement without normalizing against the previous post, which showed the graphs after subtraction of the pretest PIs.

Reinforcement(without normalizing) = mean of training PI values.

Category: Operant learning, Optogenetics, Uncategorized | No Comments

## reinforcement(after subtracting pretest)

on | by Saurabh Bedi

Below is the plot of reinforcement effect sizes for 18 genotypes. On the y-axis are the PI values for the learning effect sizes. These scores are calculated by taking the average of the PI values of the training periods and then subtracting the pretest PI value.

Reinforcement scores = mean of training score – pretest PI score

Below are the mean values of the reinforcement scores calculated for these 18 genotypes.

Category: Operant learning, Optogenetics | No Comments