We do not seem to reproduce each other's results

on Tuesday, August 14th, 2018 2:40 | by

Since we do not reproduce each other's results, and together with the bootstrapping in the previous post, I can conclude that these neurons do not have an effect on reinforcement (in general). But we will focus on TH-D’.

Residuals:
     Min       1Q   Median       3Q      Max
-0.26964 -0.15074  0.07699  0.10043  0.24611

Coefficients:
                   Estimate Std. Error t value Pr(>|t|)
(Intercept)         0.05663    0.12282   0.461    0.661
new_Christian$mean  1.07180    0.56922   1.883    0.109

Residual standard error: 0.1943 on 6 degrees of freedom
Multiple R-squared: 0.3714,  Adjusted R-squared: 0.2667
F-statistic: 3.545 on 1 and 6 DF,  p-value: 0.1087

 

Residuals:
     Min       1Q   Median       3Q      Max
-0.43451 -0.10793  0.02633  0.14283  0.28667

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)   -0.05975    0.03595  -1.662   0.1101
new_Gaia$mean  0.33806    0.19627   1.722   0.0984 .

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1787 on 23 degrees of freedom
Multiple R-squared: 0.1142,  Adjusted R-squared: 0.07574
F-statistic: 2.967 on 1 and 23 DF,  p-value: 0.09842
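For reference, a minimal sketch of how these two comparisons could have been run in R. The regressor names new_Christian$mean and new_Gaia$mean are taken from the coefficient tables above; the response vectors (my own PIs for the 8 and 25 shared lines) are hypothetical names, since the actual script is not shown:

# 'my_pis_vs_christian' and 'my_pis_vs_gaia' are hypothetical vectors of my own PIs
# for the lines each of us tested; the regressors match the coefficient names above.
fit_christian <- lm(my_pis_vs_christian ~ new_Christian$mean)
summary(fit_christian)   # 8 lines -> 6 residual degrees of freedom

fit_gaia <- lm(my_pis_vs_gaia ~ new_Gaia$mean)
summary(fit_gaia)        # 25 lines -> 23 residual degrees of freedom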


Bootstrapping NorpA flies without G4

To see whether the activation of these neurons has an effect in general, I thought of bootstrapping all the flies carrying NorpA but no G4, to see if the statistics are similar to those of my screen.

 

Here the final screen:

Here a total of 37 T-maze experiments with NorpA and UAS-GtACRs, UAS-Chrimson or UAS-ChR2XXL, but without a G4. The experiments are from Naman, Gaia and me; here the pooled effect:

Here the barplot of the results of drawing samples of 12 experiments (with replacement) 20 times. Assuming the 37 experiments are closer to the true distribution of NorpA flies, we sample from them to observe the probability of obtaining false positives, as well as the resulting distributions.
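A minimal sketch of the resampling, assuming the 37 pooled PIs are stored in a single vector; the name norpa_pis and the t-test criterion for a "false positive" are assumptions, not the exact script used:

# 'norpa_pis' is a hypothetical vector holding the 37 pooled NorpA-without-G4 PIs.
set.seed(1)                                       # only to make the example reproducible
boot_samples <- replicate(20, sample(norpa_pis, size = 12, replace = TRUE))  # 12 x 20 matrix
boot_means   <- colMeans(boot_samples)            # distribution of resampled mean PIs
boot_pvals   <- apply(boot_samples, 2, function(x) t.test(x)$p.value)
mean(boot_pvals < 0.05)                           # rough false-positive rate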

 

Here the boxplot.

From these results I would deduce that most of these neurons have no effect on reinforcement. This idea came about because I saw that the phenotype scores of th-G4+th-G80>Chrimson were always close to zero in all of the behavioral setups. I thought that this could show that the dopaminergic neurons have a context-dependent effect, and that is why PIs might be more extreme than with the negative control (th-G4+th-G80>Chrimson).

 

This is a quick edit to see how it would look with 32 lines (the same number as for the real screen).


Fussl shows a numerical difference in operant self-learning

on Tuesday, August 7th, 2018 2:49 | by

Fussl was crossed to either Stinger (control) or a UAS-TNT line to block synaptic transmission in the Fussl-positive neurons. A third construct was used but did not yield any data due to the flies' poor flight performance. The Fussl-Stinger and Fussl-TNT flies also show difficulties flying; these differences will be assessed.

The experiment was done as a pilot before moving to a larger scale.

The data are a bit inconsistent but show a positive and reassuring numerical difference. The control is a bit lower than expected compared to WTB flies (which usually show a PI of 0.6). The flies have a slightly different genetic background than WTB and pale orange eyes (still with no apparent impairment of vision). Further experiments will be conducted before proceeding to a larger sample size.

 


Joystick Update

on Monday, July 30th, 2018 2:08 | by

Assessing the difficulties in self-learning for FoxP flies


FoxP3955 flies were raised and compared to normal WTB flies. Reportedly, the FoxP mutants have reduced flight performance, as their total flight duration is decreased; this matched my experience. The problem seemed to be aggravated by the heat in the flight simulator room: the initial temperature was 27°C but rose to close to 30°C. I had trouble getting a large enough sample size (the same number of FoxP and WTB flies were loaded into the flight simulator); heat-shock proteins and other stress-related behavior might be an issue. The genotype of the flies was known while hooking the flies but was concealed later on, and the flies were randomly distributed.


Stroklitude Testing Pt. 2

Data 1:

Monica:

Anokhi:


Role of dopaminergic neurons in operant behaviour

on Friday, July 27th, 2018 3:54 | by

Positive Control: Gr28bd-G4, TrpA1-G4

Parameters: light intensity = 500 lux (side), 1000 lux (bottom); frequency = 20 Hz; delay = 1 ms; duration = 9.9 ms; voltage = 6.4 V

Red lines: completed

mb025b: not selected against tubby


Foxp-isoB-Gal4 Brains and Ventral Nerve Cord (VNC)


cd8 GFP + Bruchpilot

  

Bruchpilot

Stinger GFP + REPO + ELAV



Mean trace of all flies and how degrees of freedom vary over learning

on Monday, July 23rd, 2018 6:40 | by

Mean trace of the positive control in the Joystick, to see the overall dynamics and maybe to get an idea of which score might be best to pick.

Here the standard deviation across flies along the time axis. This is just to see whether the flies have more or less similar phenotypes at each point in time.

This is to see whether the flies have fewer degrees of freedom in any segment, by measuring the standard deviation at each segment. There does not seem to be any effect, although this might be confounded with the wiggle scores. I think entropy would be a better measure.
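A minimal sketch of how these two measures could be computed, assuming the traces are stored as a flies × time matrix; the object names traces and segment are assumptions, and the per-segment measure is read here as the within-fly SD per segment, averaged across flies:

# 'traces' is a hypothetical flies x time matrix of joystick scores;
# 'segment' is a factor of length ncol(traces) assigning each time point to a segment.
sd_across_flies <- apply(traces, 2, sd)   # SD across flies at each time point
sd_per_segment  <- sapply(split(seq_len(ncol(traces)), segment),
                          function(idx) mean(apply(traces[, idx, drop = FALSE], 1, sd)))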

 

All the same plots as above but for TH-D’, the interesting line from the screen.

 

Standard deviation across flies

Standard deviation across segments


Performance indices of the models for the Y-maze data

These are the performance indices for the different models fitted to estimate the valence of the dopaminergic clusters (a sketch of how such a comparison could be assembled follows the legend below). AIC: Akaike Information Criterion; BIC: Bayesian Information Criterion; LogLikelihood: log-likelihood estimate.

lm: linear model

+ int: taking double interactions into account

b lm: Bayesian linear model with the bayesglm function

b lm MCMC: Bayesian linear model with the MCMCglmm function

nlm: nonlinear model, lm function with fitted splines

b nlm: splines fitted to each cluster, with the MCMCglmm function

GAM: generalized additive model with the gam function
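A minimal sketch of how such a table could be assembled in R. The data frame ymaze, with a performance index pi and cluster activation columns c1..c3, is hypothetical, and I assume the gam function comes from mgcv; the MCMCglmm fits are omitted here since they are scored differently:

library(arm)       # bayesglm
library(splines)   # ns() for spline terms
library(mgcv)      # gam

# 'ymaze' is a hypothetical data frame: pi = performance index, c1..c3 = cluster activations.
m_lm  <- lm(pi ~ c1 + c2 + c3, data = ymaze)                        # lm
m_int <- lm(pi ~ (c1 + c2 + c3)^2, data = ymaze)                    # + int: double interactions
m_blm <- bayesglm(pi ~ c1 + c2 + c3, data = ymaze)                  # b lm
m_nlm <- lm(pi ~ ns(c1, 3) + ns(c2, 3) + ns(c3, 3), data = ymaze)   # nlm: splines
m_gam <- gam(pi ~ s(c1) + s(c2) + s(c3), data = ymaze)              # GAM

models <- list(lm = m_lm, "+ int" = m_int, "b lm" = m_blm, nlm = m_nlm, GAM = m_gam)
t(sapply(models, function(m) c(AIC = AIC(m), BIC = BIC(m), logLik = as.numeric(logLik(m)))))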

 

Adding double interactions seems to produce better models; nonlinearities also improve them, and so do the frequentist fits. To me it seems that these data might largely be noise, so adding interactions, nonlinearities and frequentist methods is just fitting the noise better (overfitting), which is why I get better scores with them. In addition, care needs to be taken, since I use different functions that calculate the model performance scores differently (even though the formulas are in theory the same for all of them).
