Convolution Reverb

[ Wikipedia ]

Key ideas: approximating an impulse, sine sweeps, maximum-length sequences, white noise

The coefficients of a finite impulse response can then be estimated by deconvolution: divide the Fourier transform of the input/output cross-correlation by the transform of the input's autocorrelation, and take the inverse transform of the ratio. This is difficult in practice because such recordings are highly susceptible to distortion.
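To make that concrete, here is a minimal MATLAB sketch (mine, not from the article), with a synthetic "room" standing in for a real measurement:

% x is the test signal played into the room, y the recorded response;
% the toy h_true stands in for the room so the estimate can be checked.
N = 8192;
x = randn(N,1);                        % white-noise excitation
h_true = zeros(N,1);
h_true([1 101 152]) = [1 0.7 0.3];     % direct sound plus two echoes
y = real(ifft(fft(h_true).*fft(x)));   % the "room" as a circular convolution

X = fft(x); Y = fft(y);
Sxy = conj(X).*Y;                      % FT of the input/output cross-correlation
Sxx = conj(X).*X;                      % FT of the input's autocorrelation
h_est = real(ifft(Sxy./Sxx));          % recovered FIR coefficients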

Electromyography

Though this is not a real service, it's an interesting idea:

The video above was inspired by Myo, a “Gesture Control Wrist Band” from [ Thalmic Labs ]. I was reading through their job openings, and they are hiring a very interesting team, probably typical of what a start-up needs to launch a new product.

Their goal is to use gestures for controlling a variety of devices, so they are hiring people to work on different aspects of the band itself, and effectively outsourcing app development. Then they also have openings for a technical writer, and some marketing positions.

I am trying to understand the role that each of these people would fill.

Let’s start with the sensor R&D engineer and the data specialist. I think of these positions as the “brains” behind the product – the mentors – and the other positions as the students / interns. The R&D engineer might be focusing on the hardware blueprint (OrCAD and PCBs, lab equipment and analog/digital components, choosing the chips). These blueprints would be realized (and debugged) by the hardware engineer next, working with FPGAs/CPLDs/MCUs on PCBs and then doing evaluations. Then we have an ARM Cortex-M4 coming into the picture, and an embedded software engineer who can deploy some Bluetooth (L2CAP, ATT, GATT) to get pre-processed sensor data to some computer (mobile, desktop). It’s interesting how this team would get all of this to fit on one band, while making the band shock-resistant and non-obtrusive, and keeping the power consumption and the noise levels in check.

Now the data has come into whatever computer the band is interfaced with. So, the software engineer probably comes into the picture now, making a game-like application run. There are a lot of options here, but some game/graphics programming would probably be fun. For example, just to work with my Kinect, I’m familiarizing myself with OpenGL. WebGL and DirectX are other options for just the graphics. But to have cooler elements and a nice UI, game engines (Unity, Unreal) and UI frameworks (WPF, Cocoa, Qt) are also mentioned by Thalmic.

So, we went from sensors and embedded systems to a somewhat intense computer application. Did we skip something? Yes, gesture recognition! The machine learning team might be doing a lot of supervised learning, and then there’s the data specialist we mentioned earlier, who might think through a database of examples (processed electromyography signals!). Now we need to maintain this database online (think Siri / WolframAlpha), so that we can keep expanding it with more data from the bands being used (perhaps in a beta phase). Here, we need the front-end engineer putting in a lot of JavaScript and NoSQL. There’s also the server infrastructure + DevOps engineer working on this website; I am not really sure what Redis and Nginx are (an in-memory data store and a web server / reverse proxy, it turns out), but the goal seems to be expanding and maintaining an easily searchable database. Note that Thalmic might be providing the app developers with a lot of libraries for accessing and using this database, or maybe they’ll have that taken care of by other software developers, writing the equivalent of libfreenect and greenfoot for the developers to use directly.

We also have the UI designer to make everything look smooth, and two more marketing positions for Google Analytics and a targeted social media presence.

This leaves the developer evangelist, probably there to communicate the interests of (outsourced) developers to the rest of the team, while making a few demos!

I wonder if all start-up-like companies in embedded systems have similar teams. They might just.


[Draft] Walking through glview.c (libfreenect)

Back to the Kinect project. Working through an example / demo seems like a good starting point. Here, I took the demo from [ OpenKinect ], and my goal is to add comments / links (mostly pasted from Wikipedia and Stack Overflow) on the purpose of each line / block in this code.

/*
 * This file is part of the OpenKinect Project. http://www.openkinect.org
 *
 * Copyright (c) 2010 individual OpenKinect contributors. See the CONTRIB file
 * for details.
 *
 * This code is licensed to you under the terms of the Apache License, version
 * 2.0, or, at your option, the terms of the GNU General Public License,
 * version 2.0. See the APACHE20 and GPL2 files for the text of the licenses,
 * or the following URLs:
 * http://www.apache.org/licenses/LICENSE-2.0
 * http://www.gnu.org/licenses/gpl-2.0.txt
 *
 * If you redistribute this file in source form, modified or unmodified, you
 * may:
 *   1) Leave this header intact and distribute it under the same terms,
 *      accompanying it with the APACHE20 and GPL20 files, or
 *   2) Delete the Apache 2.0 clause and accompany it with the GPL2 file, or
 *   3) Delete the GPL v2 clause and accompany it with the APACHE20 file
 * In all cases you must keep the copyright notice intact and include a copy
 * of the CONTRIB file.
 *
 * Binary distributions must follow the binary distribution requirements of
 * either License.
 */

#include <stdio.h>
// C abstracts all file operations into operations on streams of bytes. [ C file i/o ]
// ECE fun: s/w [ filters ] -> composition -> [ pipeline ]

#include <stdlib.h>
//  [ ANSI C standard library ] ⊂ [ C POSIX ]
// Digression: Stallman, [ The Right to Read ]
#include <string.h>
// manipulating null-terminated strings: copy/concatenate/tokenize/search
// [ C string handling ]
#include <assert.h>
// C preprocessor macro: assert() 
// [ Reekie tutorial ]: systematic debugging tool
// The argument to assert must be true when the macro is executed, 
// otherwise the program aborts and prints an error message.
// [ assert.h ]: verify assumptions made by the program and print a diagnostic message 
// if this assumption is false: __FILE__, __LINE__, __func__, the expression that failed (0)
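// e.g. assert(rgb_back == rgb); -- used in rgb_cb below; note that defining
// NDEBUG turns every assert() into a no-op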
#include "libfreenect.h"
// <  > vs "  ": something about where the compiler should search for this file
// [ StackExchange user ]: read your compiler/implementation's documentation 
// [ GCC ]: < system header file > vs "header files of your own program"
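// typically, "libfreenect.h" is searched first relative to the including file,
// then along the same paths as the < > system headers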
#include <pthread.h>
// On #including GL headers: Mac OS X vs "The Rest of the World"
#if defined(__APPLE__) 
//preprocessor conditional for OSX specific code. __APPLE__ and __MACH__.
  #include <GLUT/glut.h>
  //GLUT quotes: toolkit for writing small-medium sized OpenGL programs | well-suited for learning
  #include <OpenGL/gl.h>
  #include <OpenGL/glu.h> 
#else
  #include <GL/glut.h>
  #include <GL/gl.h>
  #include <GL/glu.h>
#endif
#include <math.h>
// [ overview of functions ] variety of types (often prefix): int, long, long long
// angles in radians
pthread_t freenect_thread;
// thread ID -- basically, an int used as a thread identifier [ source ]
// an abstract datatype used as a handle to reference the thread [ source ]
volatile int die = 0;
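// set from the GLUT keyboard handler (Esc) and polled by the freenect thread;
// volatile makes the compiler re-read it on every loop iteration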

int g_argc;
char **g_argv;

int window;

pthread_mutex_t gl_backbuf_mutex = PTHREAD_MUTEX_INITIALIZER;
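// guards the depth/rgb buffer pointers and the got_* flags shared between
// the capture callbacks and the GL thread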

// back: owned by libfreenect (implicit for depth)
// mid: owned by callbacks, "latest frame ready"
// front: owned by GL, "currently being drawn"
uint8_t *depth_mid, *depth_front;
uint8_t *rgb_back, *rgb_mid, *rgb_front;
GLuint gl_depth_tex;
GLuint gl_rgb_tex;
GLfloat camera_angle = 0.0;
int camera_rotate = 0;
int tilt_changed = 0;
freenect_context *f_ctx;
freenect_device *f_dev;
int freenect_angle = 0;
int freenect_led;
freenect_video_format requested_format = FREENECT_VIDEO_RGB;
freenect_video_format current_format = FREENECT_VIDEO_RGB;
pthread_cond_t gl_frame_cond = PTHREAD_COND_INITIALIZER;
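// signaled by depth_cb / rgb_cb whenever a fresh frame lands in a mid buffer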
int got_rgb = 0;
int got_depth = 0;
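// GL thread: wait for fresh frames, swap mid -> front, upload both textures,
// and draw the depth image (left) and the video image (right) side by side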
void DrawGLScene()
{
    pthread_mutex_lock(&gl_backbuf_mutex);

    // When using YUV_RGB mode, RGB frames only arrive at 15Hz, so we shouldn't force them to draw in lock-step.
    // However, this is CPU/GPU intensive when we are receiving frames in lockstep.
    if (current_format == FREENECT_VIDEO_YUV_RGB) {
        while (!got_depth && !got_rgb) {
            pthread_cond_wait(&gl_frame_cond, &gl_backbuf_mutex);
        }
    } else {
        while ((!got_depth || !got_rgb) && requested_format != current_format) {
            pthread_cond_wait(&gl_frame_cond, &gl_backbuf_mutex);
        }
    }

    if (requested_format != current_format) {
        pthread_mutex_unlock(&gl_backbuf_mutex);
        return;
    }

    uint8_t *tmp;

    if (got_depth) {
        tmp = depth_front;
        depth_front = depth_mid;
        depth_mid = tmp;
        got_depth = 0;
    }
    if (got_rgb) {
        tmp = rgb_front;
        rgb_front = rgb_mid;
        rgb_mid = tmp;
        got_rgb = 0;
    }

    pthread_mutex_unlock(&gl_backbuf_mutex);

    glBindTexture(GL_TEXTURE_2D, gl_depth_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, 640, 480, 0, GL_RGB, GL_UNSIGNED_BYTE, depth_front);

    if (camera_rotate) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        freenect_raw_tilt_state* state;
        freenect_update_tilt_state(f_dev);
        state = freenect_get_tilt_state(f_dev);
        GLfloat x_accel_raw, x_accel, y_accel_raw, y_accel;
        x_accel_raw = (GLfloat)state->accelerometer_x/819.0;
        y_accel_raw = (GLfloat)state->accelerometer_y/819.0;

        // sloppy acceleration vector cleanup
        GLfloat accel_length = sqrt(x_accel_raw * x_accel_raw + y_accel_raw * y_accel_raw);
        x_accel = x_accel_raw/accel_length;
        y_accel = y_accel_raw/accel_length;
        camera_angle = atan2(y_accel, x_accel)*180/M_PI - 90.0;
    } else {
        camera_angle = 0;
    }

    glLoadIdentity();
    glPushMatrix();
    glTranslatef((640.0/2.0), (480.0/2.0), 0.0);
    glRotatef(camera_angle, 0.0, 0.0, 1.0);
    glTranslatef(-(640.0/2.0), -(480.0/2.0), 0.0);
    glBegin(GL_TRIANGLE_FAN);
    glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
    glTexCoord2f(0, 1); glVertex3f(0, 0, 1.0);
    glTexCoord2f(1, 1); glVertex3f(640, 0, 1.0);
    glTexCoord2f(1, 0); glVertex3f(640, 480, 1.0);
    glTexCoord2f(0, 0); glVertex3f(0, 480, 1.0);
    glEnd();
    glPopMatrix();

    glBindTexture(GL_TEXTURE_2D, gl_rgb_tex);
    if (current_format == FREENECT_VIDEO_RGB || current_format == FREENECT_VIDEO_YUV_RGB)
        glTexImage2D(GL_TEXTURE_2D, 0, 3, 640, 480, 0, GL_RGB, GL_UNSIGNED_BYTE, rgb_front);
    else
        glTexImage2D(GL_TEXTURE_2D, 0, 1, 640, 480, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, rgb_front+640*4);

    glPushMatrix();
    glTranslatef(640+(640.0/2.0), (480.0/2.0), 0.0);
    glRotatef(camera_angle, 0.0, 0.0, 1.0);
    glTranslatef(-(640+(640.0/2.0)), -(480.0/2.0), 0.0);
    glBegin(GL_TRIANGLE_FAN);
    glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
    glTexCoord2f(0, 1); glVertex3f(640, 0, 0);
    glTexCoord2f(1, 1); glVertex3f(1280, 0, 0);
    glTexCoord2f(1, 0); glVertex3f(1280, 480, 0);
    glTexCoord2f(0, 0); glVertex3f(640, 480, 0);
    glEnd();
    glPopMatrix();

    glutSwapBuffers();
}

void keyPressed(unsigned char key, int x, int y)
{
    if (key == 27) {
        die = 1;
        pthread_join(freenect_thread, NULL);
        glutDestroyWindow(window);
        free(depth_mid);
        free(depth_front);
        free(rgb_back);
        free(rgb_mid);
        free(rgb_front);
        // Not pthread_exit because OSX leaves a thread lying around and doesn't exit
        exit(0);
    }
    if (key == 'w') {
        freenect_angle++;
        if (freenect_angle > 30) {
            freenect_angle = 30;
        }
        tilt_changed++;
    }
    if (key == 's') {
        freenect_angle = 0;
        tilt_changed++;
    }
    if (key == 'f') {
        if (requested_format == FREENECT_VIDEO_IR_8BIT)
            requested_format = FREENECT_VIDEO_RGB;
        else if (requested_format == FREENECT_VIDEO_RGB)
            requested_format = FREENECT_VIDEO_YUV_RGB;
        else
            requested_format = FREENECT_VIDEO_IR_8BIT;
    }
    if (key == 'x') {
        freenect_angle--;
        if (freenect_angle < -30) {
            freenect_angle = -30;
        }
        tilt_changed++;
    }
    if (key == 'e') {
        static freenect_flag_value auto_exposure = FREENECT_ON;
        freenect_set_flag(f_dev, FREENECT_AUTO_EXPOSURE, auto_exposure);
        auto_exposure = auto_exposure ? FREENECT_OFF : FREENECT_ON;
    }
    if (key == 'b') {
        static freenect_flag_value white_balance = FREENECT_ON;
        freenect_set_flag(f_dev, FREENECT_AUTO_WHITE_BALANCE, white_balance);
        white_balance = white_balance ? FREENECT_OFF : FREENECT_ON;
    }
    if (key == 'r') {
        static freenect_flag_value raw_color = FREENECT_ON;
        freenect_set_flag(f_dev, FREENECT_RAW_COLOR, raw_color);
        raw_color = raw_color ? FREENECT_OFF : FREENECT_ON;
    }
    if (key == 'm') {
        static freenect_flag_value mirror = FREENECT_ON;
        freenect_set_flag(f_dev, FREENECT_MIRROR_DEPTH, mirror);
        freenect_set_flag(f_dev, FREENECT_MIRROR_VIDEO, mirror);
        mirror = mirror ? FREENECT_OFF : FREENECT_ON;
    }
    if (key == '1') {
        freenect_set_led(f_dev, LED_GREEN);
    }
    if (key == '2') {
        freenect_set_led(f_dev, LED_RED);
    }
    if (key == '3') {
        freenect_set_led(f_dev, LED_YELLOW);
    }
    if (key == '4') {
        freenect_set_led(f_dev, LED_BLINK_GREEN);
    }
    if (key == '5') {
        // 5 is the same as 4
        freenect_set_led(f_dev, LED_BLINK_GREEN);
    }
    if (key == '6') {
        freenect_set_led(f_dev, LED_BLINK_RED_YELLOW);
    }
    if (key == '0') {
        freenect_set_led(f_dev, LED_OFF);
    }
    if (key == 'o') {
        if (camera_rotate) {
            camera_rotate = 0;
            glDisable(GL_DEPTH_TEST);
        } else {
            camera_rotate = 1;
            glEnable(GL_DEPTH_TEST);
        }
    }
    if (tilt_changed) {
        freenect_set_tilt_degs(f_dev, freenect_angle);
        tilt_changed = 0;
    }
}

void ReSizeGLScene(int Width, int Height)
{
    glViewport(0, 0, Width, Height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 1280, 0, 480, -5.0f, 5.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

void InitGL(int Width, int Height)
{
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    //glClearDepth(0.0);
    //glDepthFunc(GL_LESS);
    //glDepthMask(GL_FALSE);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_BLEND);
    glDisable(GL_ALPHA_TEST);
    glEnable(GL_TEXTURE_2D);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glShadeModel(GL_FLAT);

    glGenTextures(1, &gl_depth_tex);
    glBindTexture(GL_TEXTURE_2D, gl_depth_tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenTextures(1, &gl_rgb_tex);
    glBindTexture(GL_TEXTURE_2D, gl_rgb_tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    ReSizeGLScene(Width, Height);
}

void *gl_threadfunc(void *arg)
{
    printf("GL thread\n");

    glutInit(&g_argc, g_argv);

    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_ALPHA | GLUT_DEPTH);
    glutInitWindowSize(1280, 480);
    glutInitWindowPosition(0, 0);

    window = glutCreateWindow("LibFreenect");

    glutDisplayFunc(&DrawGLScene);
    glutIdleFunc(&DrawGLScene);
    glutReshapeFunc(&ReSizeGLScene);
    glutKeyboardFunc(&keyPressed);

    InitGL(1280, 480);

    glutMainLoop();

    return NULL;
}

uint16_t t_gamma[2048];

void depth_cb(freenect_device *dev, void *v_depth, uint32_t timestamp)
{
    int i;
    uint16_t *depth = (uint16_t*)v_depth;

    pthread_mutex_lock(&gl_backbuf_mutex);
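    // map each 11-bit depth value through the t_gamma LUT, then spread the
    // result over a six-segment color ramp:
    // white -> red -> yellow -> green -> cyan -> blue -> black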
    for (i = 0; i < 640*480; i++) {
        int pval = t_gamma[depth[i]];
        int lb = pval & 0xff;
        switch (pval >> 8) {
            case 0:
                depth_mid[3*i+0] = 255;
                depth_mid[3*i+1] = 255-lb;
                depth_mid[3*i+2] = 255-lb;
                break;
            case 1:
                depth_mid[3*i+0] = 255;
                depth_mid[3*i+1] = lb;
                depth_mid[3*i+2] = 0;
                break;
            case 2:
                depth_mid[3*i+0] = 255-lb;
                depth_mid[3*i+1] = 255;
                depth_mid[3*i+2] = 0;
                break;
            case 3:
                depth_mid[3*i+0] = 0;
                depth_mid[3*i+1] = 255;
                depth_mid[3*i+2] = lb;
                break;
            case 4:
                depth_mid[3*i+0] = 0;
                depth_mid[3*i+1] = 255-lb;
                depth_mid[3*i+2] = 255;
                break;
            case 5:
                depth_mid[3*i+0] = 0;
                depth_mid[3*i+1] = 0;
                depth_mid[3*i+2] = 255-lb;
                break;
            default:
                depth_mid[3*i+0] = 0;
                depth_mid[3*i+1] = 0;
                depth_mid[3*i+2] = 0;
                break;
        }
    }
    got_depth++;
    pthread_cond_signal(&gl_frame_cond);
    pthread_mutex_unlock(&gl_backbuf_mutex);
}

void rgb_cb(freenect_device *dev, void *rgb, uint32_t timestamp)
{
    pthread_mutex_lock(&gl_backbuf_mutex);

    // swap buffers
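    // libfreenect just filled rgb_back (asserted == rgb below); hand it the old
    // mid buffer to fill next, and publish the finished frame as the new rgb_mid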
    assert(rgb_back == rgb);
    rgb_back = rgb_mid;
    freenect_set_video_buffer(dev, rgb_back);
    rgb_mid = (uint8_t*)rgb;

    got_rgb++;
    pthread_cond_signal(&gl_frame_cond);
    pthread_mutex_unlock(&gl_backbuf_mutex);
}

void *freenect_threadfunc(void *arg)
{
    int accelCount = 0;

    freenect_set_tilt_degs(f_dev, freenect_angle);
    freenect_set_led(f_dev, LED_RED);
    freenect_set_depth_callback(f_dev, depth_cb);
    freenect_set_video_callback(f_dev, rgb_cb);
    freenect_set_video_mode(f_dev, freenect_find_video_mode(FREENECT_RESOLUTION_MEDIUM, current_format));
    freenect_set_depth_mode(f_dev, freenect_find_depth_mode(FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
    freenect_set_video_buffer(f_dev, rgb_back);

    freenect_start_depth(f_dev);
    freenect_start_video(f_dev);

    printf("'w' - tilt up, 's' - level, 'x' - tilt down, '0'-'6' - select LED mode \n");
    printf("'f' - change video format, 'm' - mirror video, 'o' - rotate video with accelerometer \n");
    printf("'e' - auto exposure, 'b' - white balance, 'r' - raw color \n");

    while (!die && freenect_process_events(f_ctx) >= 0) {
        // Throttle the text output
        if (accelCount++ >= 2000) {
            accelCount = 0;
            freenect_raw_tilt_state* state;
            freenect_update_tilt_state(f_dev);
            state = freenect_get_tilt_state(f_dev);
            double dx, dy, dz;
            freenect_get_mks_accel(state, &dx, &dy, &dz);
            printf("\r raw acceleration: %4d %4d %4d mks acceleration: %4f %4f %4f", state->accelerometer_x, state->accelerometer_y, state->accelerometer_z, dx, dy, dz);
            fflush(stdout);
        }

        if (requested_format != current_format) {
            freenect_stop_video(f_dev);
            freenect_set_video_mode(f_dev, freenect_find_video_mode(FREENECT_RESOLUTION_MEDIUM, requested_format));
            freenect_start_video(f_dev);
            current_format = requested_format;
        }
    }

    printf("\nshutting down streams...\n");

    freenect_stop_depth(f_dev);
    freenect_stop_video(f_dev);

    freenect_close_device(f_dev);
    freenect_shutdown(f_ctx);

    printf("-- done!\n");
    return NULL;
}

int main(int argc, char **argv)
{
    int res;

    depth_mid = (uint8_t*)malloc(640*480*3);
    depth_front = (uint8_t*)malloc(640*480*3);
    rgb_back = (uint8_t*)malloc(640*480*3);
    rgb_mid = (uint8_t*)malloc(640*480*3);
    rgb_front = (uint8_t*)malloc(640*480*3);

    printf("Kinect camera test\n");

    int i;
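    // precompute the depth colormap curve used in depth_cb:
    // t_gamma[i] = 9216 * (i/2048)^3, a cubic ramp over the 11-bit depth range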
    for (i = 0; i < 2048; i++) {
        float v = i/2048.0;
        v = powf(v, 3) * 6;
        t_gamma[i] = v*6*256;
    }

    g_argc = argc;
    g_argv = argv;

    if (freenect_init(&f_ctx, NULL) < 0) {
        printf("freenect_init() failed\n");
        return 1;
    }

    freenect_set_log_level(f_ctx, FREENECT_LOG_DEBUG);
    freenect_select_subdevices(f_ctx, (freenect_device_flags)(FREENECT_DEVICE_MOTOR | FREENECT_DEVICE_CAMERA));

    int nr_devices = freenect_num_devices(f_ctx);
    printf("Number of devices found: %d\n", nr_devices);

    int user_device_number = 0;
    if (argc > 1)
        user_device_number = atoi(argv[1]);

    if (nr_devices < 1) {
        freenect_shutdown(f_ctx);
        return 1;
    }

    if (freenect_open_device(f_ctx, &f_dev, user_device_number) < 0) {
        printf("Could not open device\n");
        freenect_shutdown(f_ctx);
        return 1;
    }

    res = pthread_create(&freenect_thread, NULL, freenect_threadfunc, NULL);
    if (res) {
        printf("pthread_create failed\n");
        freenect_shutdown(f_ctx);
        return 1;
    }

    // OS X requires GLUT to run on the main thread
    gl_threadfunc(NULL);

    return 0;
}

Mad-cow disease, Alzheimer’s, and nanosecond-memory-accesses?

Dr. Harvey mentioned some very cool things.

Infectious agents, basically, are transferring information. How do you copy information? Well, use a two-stranded molecule, split it up, and replicate (makes me think of bit inversion with bases!) That explains why Dr. Widrow would assume that “innate” memory – transferred along generations – might be R/DNA based (mild pun intended). But nucleic acids aren’t the only infectious agents around. Proteins (aha, neurons have a lot!) can misfold and cause a lot of problems. Can they be linked to memories, and by a long shot, to Alzheimer’s?

[ Prion ]: an infectious agent composed of protein in a misfolded form. This is the central idea of the Prion Hypothesis, which remains debated. This would be in contrast to all other known infectious agents (viruses, bacteria, fungi, or parasites), which must contain nucleic acids (either DNA, RNA, or both).

The Return of the Prion Hypothesis.

[ Jim Schnabel, Dana article, 2012 ]

Because the novel properties of the scrapie agent distinguish it from viruses, plasmids, and viroids, a new term “prion” is proposed to denote a small proteinaceous infectious particle which is resistant to inactivation by most procedures that modify nucleic acids [i.e., DNA or RNA]. Knowledge of the scrapie agent structure may have significance for understanding the causes of several degenerative diseases.

[ Stanley Prusiner, 1982 ]

Alzheimer’s features a slow accumulation of A-beta plaques throughout the brain | [ Beyreuther ] A-beta protein is a fragment—the remainder after a cutting by enzymes—of a much larger neuronal membrane protein…APP…coded by a gene on chromosome 21.

[ APP: an integral membrane protein expressed in many tissues and concentrated in the synapses of neurons. ]

APP-overexpressing mouse models…developed plaques, but did not develop the…neurofibrillary tangles, which are made of tau protein and appear inside affected neurons…Most families with APP mutations were found not to overexpress A-beta…but to overexpress a comparatively rare, aggregation-prone version, known as A-beta-42.

Were A-beta plaques the wrong target? Were the tau tangles the true culprits? Or were both these amyloids “red herrings”?

A-beta…, besides forming long, plaque-making fibril aggregates, can cluster into tiny, soluble “oligomers”

[ William Klein: memory-linked synapse structure and signal transduction ] [ talk ]

Researchers also found evidence that other amyloid-forming proteins, such as tau, Parkinson’s disease-related alpha synuclein protein, Huntington’s disease-related huntingtin protein, and even the PrP protein implicated in prion diseases, are toxic to neurons principally as oligomers, not as large amyloid fibrils.

These oligomers seem to share a common…“conformation” that makes them toxic somehow. A number of labs including Charles Glabe’s…have made conformation-specific antibodies that can recognize toxic oligomers of many of these proteins despite their very different amino-acid sequences.

…circumstantial evidence that the buildup of A-beta aggregation over decades eventually triggers tau aggregation—which represents the last, lethal stage of disease.


[ APP PDB ]

Linking APP and memory

[ Roles of amyloid precursor protein and its fragments in regulating neural activity, plasticity and memory ]

From the abstract:

Here we review evidence addressing these fundamental questions, paying particular attention to the contributions that APP fragments play in synaptic transmission and neural plasticity, as these may be key to understanding their effects on learning and memory. It is clear from this literature that APP fragments, including Aβ, can exert a powerful regulation of key neural functions including cell excitability, synaptic transmission and long-term potentiation, both acutely and over the long-term. Furthermore, there is a small but growing literature confirming that these fragments correspondingly regulate behavioral learning and memory. These data indicate that a full account of cognitive dysfunction in AD will need to incorporate the actions of the full complement of APP fragments.

More people

[ Louise T. Chow ]

Sci-fi section?

Anyway, if you were to design a memory architecture using proteins, how would you do it? Is that how biology does it? Are neurons collections of transistors and APP-based memory cells? Would such thoughts lead to a von-Neumann-like bias in our understanding, or perhaps to an alternative computation paradigm? Now where’s sci-fi when you need it…


Neural nets off of DNA o.O

Either something is really fishy about the universe’s current state, or I just feel like I’m suddenly seeing connections between the dots that have been floating around in my head (some even for a few semesters now). What’s going on?

I have been relying on the LMS algorithm for all the room identification processing.
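(For anyone else connecting these dots, here is a minimal sketch of what I mean by LMS-based identification; the system and the numbers are made up:)

mu = 0.01; M = 4;               % step size, adaptive filter length
h = [0.9; -0.4; 0.2; 0.1];      % toy "unknown" FIR system
w = zeros(M,1);                 % adaptive weights
x = randn(5000,1);              % excitation
d = filter(h, 1, x);            % desired signal = system output
for n = M:length(x)
    xv = x(n:-1:n-M+1);         % most recent M input samples
    e = d(n) - w'*xv;           % error against the adaptive filter's output
    w = w + mu*e*xv;            % LMS update
end
% w now approximates h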

Bernard Widrow and Ted Hoff invented the LMS algorithm. Dr. Widrow is interested in neural nets for cognitive memory. He assumes a DNA-*based* memory.

Lulu Qian is making neural nets with DNA. Her lab runs macromolecular simulations (Dr. Harvey!), and she also mentions Hopfield nets (Dr. Anderson!).

The Qian Lab participates in the SURF program. Whoa.

Dr. Qian completed her graduate work at Southeast. That’s where Liang Chen has been working on HRIRs.

My brain’s pattern recognition hardware just slammed hard on the brakes. Robotic musicians, a Kinect, and Chrome-HRIR are also lurking around somewhere underneath an MRI machine, a DSP (on a CAN bus), and a GPU’s Si. WHAT IS HAPPENING.

And I should get back to reading about pn junctions.

W̶a̶l̶k̶i̶n̶g̶ ̶d̶o̶w̶n̶ ̶t̶h̶e̶ ̶m̶e̶m̶o̶r̶y̶ ̶ penny lane there’s a barber showing photographs

This post is about Dr. Widrow’s presentation on cognitive memory:

Key assumptions:

  • All forms of memory (acquired and innate – like the ROM of walking) are stored and retrieved via the same mechanisms
  • Memory “folders” in the form of macromolecules (D/RNA)
  • Associative neural nets for memory retrieval (toy sketch below)
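A toy illustration of that last assumption (my sketch, nothing from the slides): store two patterns in a Hopfield-style weight matrix, then retrieve one from a corrupted cue.

% associative recall: two stored 8-bit (+/-1) patterns
p1 = [1 -1 1 -1 1 -1 1 -1]';
p2 = [1 1 1 1 -1 -1 -1 -1]';
W = p1*p1' + p2*p2';             % Hebbian outer-product weights
W(logical(eye(8))) = 0;          % no self-connections
cue = p1; cue(1:2) = -cue(1:2);  % corrupt two bits of p1
for t = 1:5
    cue = sign(W*cue);           % synchronous updates until stable
end
isequal(cue, p1)                 % recall succeeds -> prints 1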

Interesting questions on the last slide.

Three for loops sound really wrong. Try again!

import = imread('coldplay_viva1.jpg');
subplot(121)
    imshow(import)

import = double(import);

w = 51;                     % window size; must be odd
if mod(w,2)==0
    error('Choose an odd number')
end
scale = 1/w^2;              % normalize by the w*w pixels in each window
filter = scale*ones(w,w);
[rows, cols, tree] = size(import);

filtered_out = zeros(rows-w, cols-w, tree);
for layer = 1:tree
    for row_ndx = 1:rows-w
        for col_ndx = 1:cols-w
            window = import(row_ndx:row_ndx+w-1, col_ndx:col_ndx+w-1, layer);
            filtered_out(row_ndx, col_ndx, layer) = sum(sum(filter.*window));
        end
    end
end

filtered_out = uint8(filtered_out);
subplot(122)
    imshow(filtered_out)
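And for the “Try again!”: the two inner loops can be replaced by a call to conv2 per color channel (a sketch under the same assumptions; 'valid' keeps only the fully overlapped windows, giving a similarly shrunken output):

img = double(imread('coldplay_viva1.jpg'));
w = 51;                                % odd window size
kernel = ones(w)/w^2;                  % normalized box filter
out = zeros(size(img,1)-w+1, size(img,2)-w+1, size(img,3));
for layer = 1:size(img,3)
    out(:,:,layer) = conv2(img(:,:,layer), kernel, 'valid');
end
subplot(122)
    imshow(uint8(out))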

10 second windows, 2% in 2 Hz

From the [ Brain User Interface Group ]:

2. Tecchio, F., Salustri, C., Thaut, M. H., Pasqualetti, P., & Rossini, P. M. (2000). Conscious and preconscious adaptation to rhythmic auditory stimuli: a magnetoencephalographic study of human brain responses. Springer-Verlag Research Article.

a. Summary: The auditory cortex is shown through experiments to discriminate time characteristics of incoming rhythmic stimuli. Findings suggest that the auditory cortex contributes directly to the synchronization of motor output, in particular through the “thalamic projections shared with the supplementary motor area, as observed in musicians tapping different rhythms.” It is also suggested that local sensory memory retains audio information for approx. 10 s and is sensitive to deviations “of at least 2% in the 2 Hz rhythmicity.”


7. Rosenboom, D. (1990). The Performing Brain. Computer Music Journal, 14(1), Spring 1990, 48-66.

a. Summary: Historical references from the 1970s discussing spontaneous generation of music. Expresses criticism of biomouse and EEG methods for creating music. Describes SQUIDs (superconducting quantum interference devices), which can detect localized neuromagnetic fields, as promising. {SQUIDs seem to be an arrayed version of the NSI electrode}

11. Birbaumer, N., & Öhman, A. (Eds.). (1993). The Structure of Emotion: Psychophysiological, Cognitive and Clinical Aspects.