Sunday, June 19, 2011
Yes, after a long winter, it's time to start working on the car again.
The batteries are from Deka, which has a manufacturing site and warehouse nearby, so I just drove down and bought them right off the loading bay.
I mounted the motor controller (Curtis) and bought a great (I think) deep-cycle charger from Noco, with six individual charging circuits.
The battery box is custom, built from aluminum angle stock creatively cut up.
Now I'm working on all the interior wiring and hooking up the last item - a 72 volt to 12 volt converter. One problem: my multitester reads voltage between the battery positive and the battery box... meaning if anything touches both, it'll immolate in a shower of sparks.
Note to others - wear eye protection when working on the battery box!
Anyhooo - something is very wrong, so I am tearing it out and investigating.
Saturday, January 9, 2010
Fiat 600 has transmission and motor mounted
An update on my electric car project:
Ray and Laurie at Performance Apex have made a lot of progress mounting the motor and getting the transmission hooked up.
I believe we'll have three gears - two forward and one back. Since the motor controller also has a reverse, that should be more than enough!
Check out the custom motor mounting hardware also providing some rigidity to the back end.
The rubber mounts are from a Lotus.
The connection from the keyed shaft on the motor to the transmission will go through a tractor coupling with a rubber torque piece allowing for some flex - that should make things smoother than a hard coupling.
It's not installed in this picture.
Here are two pix of the motor mount structure. It is also supported in the front by the transmission, of course. Note the straps holding the transmission onto the frame.
That's Ray's hand!
Here you can see the Lotus rubber mounts, the speed controller on the motor, and the U/V/W connections for the three-phase AC controller.
The suspension parts are looking really good... stopping should be no problem with the new disc brakes on the front. The drums on the rear are replaced.
Of course the exterior and interior work has not been done yet - it looks a bit rough.
Monday, May 4, 2009
How safe are US passports?
How safe is your US passport.... before you even get it? Imagine ending up on the no-fly list forever because someone lost the box your passport was in.
I was traveling back from LA just before Christmas, and the airport was of course a nightmare... waiting for my bag, I spotted this forlorn, lonely-looking plastic-bagged package sitting alone by the side of the baggage carousel, as if abandoned.
Looking closer, through the little handle cutouts in the cardboard, I saw what looked a lot like passports. Could it be?
I took a few pictures, and then alerted an airline worker that there seemed to be a pile of US passports sitting out unattended... and that might be a national security risk.
She got flustered and aggressive right away, so I beat it before I ended up in Guantanamo.
So - what can we tell from the photos?
AIR NET
1-800-824-6058 - which comes back to the Airnet.com contact page.
The Destination was
Seattle Passport Agency
Federal Office Building, Room
915 Second Ave
Seattle, WA 98174
My iPhone timestamps the picture:
Shot 12/19/2008 about 10:19 PM.
I popped the tracking info into the helpful tracking link on the AirNet site.
package id 5514109007
tracking at: http://www.airnet.com/Tracking/shiplink.htm
Hmmm... that's odd! The tracking info says the package was supposed to go from Burbank to Boeing Field, and that it arrived on time. But it also says the pickup occurred at 12/19 10:52 PM - which is 33 minutes AFTER I took the picture, and at that point the passports were at Sea-Tac, sitting in the baggage claim, not awaiting pickup near Burbank.
I wonder if someone modified the records on this tracking slip? ;=)
The AirNet site says.....
"For the most critical transportation of cargo or passengers, AirNet delivers – any location, at any time.
Founded in 1974, we began transporting checks and other time-critical, valuable documents for the nation’s banking industry. .......
AirNet handles your shipments with urgency, precision, and attention to detail every step of the way. Our proven reliability in the most demanding situations has earned the trust of thousands of clients."
Friday, February 27, 2009
bleeding edge HA mysql cluster
I just got back from the MySQL High Availability class in Denver (thanks for the class, George!)
and it's time to implement HA MySQL at Mogreet.
I'm going to start with a test setup on my dev box, to see what issues present themselves in getting, compiling, and installing the code, and then porting the Mogreet DBs. I'll blog it in case it helps anyone.
First - getting the most recent code.
The generally available version of cluster is 6.3, but there are serious improvements in 6.4, including the ability to add nodes while the cluster is up. So I'm going for the bleeding-edge beta version, which is mysql-5.1.32-ndb-6.4.3 as of 2/23/09 - about four days ago. Mmm - fresh software!
You can browse the current MySQL 5.1.32 and cluster 6.4 snapshots here:
ftp://ftp.mysql.com/pub/mysql/download/cluster_telco/
Here's the direct link to the version I'm using:
ftp://ftp.mysql.com/pub/mysql/download/cluster_telco/mysql-5.1.32-ndb-6.4.3
Here are my configure flags:
./configure \
--prefix=/usr/mogreet_distro/mysql-5.1.32-beta-ndb \
--enable-assembler \
--enable-profiling \
--with-client-ldflags=-all-static \
--with-mysqld-ldflags=-all-static \
--with-fast-mutexes \
--enable-local-infile \
--disable-grant-options \
--with-ssl \
--with-innodb \
--with-plugins=federated,heap,innobase,myisam,ndbcluster,blackhole,archive \
--with-ndb-test --with-ndb-docs
And now it's compile time.
Starting about 10:08 AM... finished at 10:32.
There are errors with ndb:
make[4]: *** [ndb_mgmd] Error 1
So I'll install and see what did/did not get built.
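To check what did and did not get built, here's a quick Ruby sketch to run after make install - hedged: the binary names are the standard cluster daemons, and the install directories are a guess based on the --prefix above.
# check_build.rb - report which cluster binaries actually got installed
prefix = '/usr/mogreet_distro/mysql-5.1.32-beta-ndb'
%w[mysqld ndbd ndb_mgmd ndb_mgm].each do |bin|
  # look in the usual install subdirectories for each binary
  candidates = %w[bin libexec sbin].map { |d| File.join(prefix, d, bin) }
  path = candidates.find { |p| File.executable?(p) }
  puts(path ? "OK      #{path}" : "MISSING #{bin}")
end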
Sunday, February 15, 2009
My EV Project: Fiat 600
My electric vehicle project is picking up speed. Pun intended. The project car, a 1964 Fiat 600, is at Ray's (Performance Apex) getting a rebuild on the brakes and suspension - important stuff I don't want to do.
Next step is picking the motor and controller.
I've decided to go new-school and use a 3-phase AC induction motor. In past years this sort of setup was out of reach of hobbyists, but Curtis has a new PMC AC motor controller (with regeneration) that makes it possible.
Right now I need to pick the motor size.
Here are the specs on the original motor(s) that came in the vehicle:
633 cc straight-4 OHV, 21 hp peak (15.66 kW)
767 cc straight-4 OHV, 29 hp (21.63 kW)
Not much motor. Which is good. There seem to be three AC outfits selling what looks like the exact same matched Curtis controller and AC motor setup. No one seems to know who makes the motors:
hiperformance golf cars
electric motorsports
thunderstruck motors
If I could get away with this one:
AC-13 1238, 36-48 V, 650 amp, 26 hp, 90 ft·lbs, 6000 rpm, $2900 (about 19.4 kW / 122 N·m)
Since that closely matches the power rating of the larger of the two original engines, I could get away with fewer batteries - maybe 12 cells of 3.2 volts each (38.4 V nominal). That might put LiPo batteries in reach!
The next step up would be the same motor and controller, tuned up to run more volts:
AC-15 1238, 48-84 V, 550 amp, 46 hp, 105 ft·lbs, 7500 rpm, $3200 (about 34.3 kW / 142.4 N·m)
More than enough power.
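For reference, the metric figures in parentheses come from the standard conversions - 1 hp is about 0.7457 kW and 1 ft·lb is about 1.3558 N·m. A quick Ruby sketch:
# unit conversions for the motor specs above
def hp_to_kw(hp); hp * 0.7457; end
def ftlb_to_nm(ftlb); ftlb * 1.3558; end
hp_to_kw(26)     # => ~19.39 (AC-13)
ftlb_to_nm(90)   # => ~122.02
hp_to_kw(46)     # => ~34.30 (AC-15)
ftlb_to_nm(105)  # => ~142.36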
There's a question of how much heat build-up there would be, and at what fraction of full power the car would be running; that might suggest getting a bigger motor. But I think I'll try the smaller one first, given the light weight of the car.
Update:
I visited Ray and Lorrie and took a look at the progress. Ray has found and built new parts for the critical areas of the suspension. This little 600 now has four-wheel disc brakes (from a Scorpion, which is handy), new wheel bearings, bushings, tie rods, and more.
Tuesday, February 10, 2009
Working with Pound and Logging
One of the most frustrating parts of the Pound reverse proxy has been getting the logging to work.
The man page indicates that you can direct logging to standard output for testing, but that means you lose the logging once you close the terminal. Not good for production.
So I struggled to find a better option for our production environment: OSX 10.4.
My first thought was to leave the pound configuration set to log to std out, and send the std out and std err to a file. That looked like this:
#the pound.cfg file:
LogLevel 2
LogFacility -
# make a file to log to
root# touch /var/tmp/pound.log
#make it writeable
root# chmod ugo+rw /var/tmp/pound.log
# run pound with output redirection
root# pound -v -f pound.cfg >> /var/tmp/pound.log 2>&1
But surprisingly, that didn't work well. A few lines made it into the log, then no more. Odd. Gremlins.
So I did it the right way: through the syslogd facilities.
Pound (and lots of other daemons) can send its messages to syslogd, and each daemon tags its own messages with two bits of info: a 'facility' and a 'severity'. The facility is just a routing label, chosen from a fixed set; local0 through local7 are reserved for local use.
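To see the facility/severity idea from the program side, here's a minimal sketch in Ruby using the standard Syslog library (nothing to do with Pound itself; 'myapp' is just a made-up tag):
require 'syslog'
# open a connection tagged with facility local4, then log at two severities
Syslog.open('myapp', Syslog::LOG_PID, Syslog::LOG_LOCAL4) do |log|
  log.info('an info-severity message, routed via local4')
  log.err('an err-severity message, routed via local4')
end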
So in the pound.cfg file, set the LogFacility. I picked local4, but any of the localN facilities not already in use would do.
#the pound.cfg file:
LogLevel 2
LogFacility local4
Now configure syslogd to actually listen for the pound logging on local4 by editing /etc/syslog.conf and adding this line (on most syslogds the separator between the selector and the file must be a tab):
local4.* /var/log/pound.log
Which tells syslogd to listen to all messages from local4 and direct them to the file at /var/log/pound.log.
Next step - restart syslogd so it rereads the config file. On the Mac you do this by unloading its plist and then reloading it.
root# launchctl unload /System/Library/LaunchDaemons/com.apple.syslogd.plist
root# launchctl load /System/Library/LaunchDaemons/com.apple.syslogd.plist
And monitor the pound.log:
root# tail -50f /var/log/pound.log
And I get.... not much. Turns out the default filter on syslogd doesn't show the debug level needed to see the pound messages.
Check your filter level like this:
root# syslog -c 0
It probably says:
Master filter mask: Off
Meaning that the master override filter is not doing anything.
Turn it all the way up temporarily to see the logging messages:
root# syslog -c 0 -d
which turns on the debug level of logging.
Now you should see your pound logging - assuming pound is running now and something is generating traffic.
After it all looks good, you should turn off the master filter and configure a facility filter just for pound.
Check the syslogd man page....
man syslogd
Sunday, February 8, 2009
wrapping MySQL queries in Memcached
One huge opportunity for scalability and speed improvements in almost every web app involves reducing the number of requests for dynamic data that must be directed to the back-end database server. A great way to do that (for data that is not transactional) is to put in a memcached server and use it 'on top' of the MySQL DB.
A memcached server is a brilliant piece of open-source engineering from the fine folks at danga.
Memcached is a distributed hash table. You create a key, composed of text, and give it some data to store with that key. You may then GET the data back by giving memcached the same key. It's that simple. Because the data is always RAM resident, and memcached does no complicated indexing when you add elements, inserting and retrieving elements scales nicely.
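For a concrete feel, here's a minimal round trip using the Ruby memcache-client gem (the same MemCache class used in the code below); the host, key, and value are just examples:
require 'memcache'
cache = MemCache.new('localhost:11211', :namespace => 'demo')
cache.set('greeting', 'hello world', 60)  # store for 60 seconds
cache.get('greeting')                     # => "hello world"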
The basic idea of wrapping the mysql queries in the memcached is this:
1) When inserting or updating records in the DB, also add a memcached entry, with an expiry period, like say, 1 minute.
2) Then when the app needs to recover information, first check the memcached. If it's there, use it, and do not bother the database. If it's not in the memcached (it expired, or has not been set yet), look in the db for the data, and refer back to item 1) above.
Here is a ruby example that very flexibly stores just about any MySQL query.
Not printed here are the mysql handle definition (standard) and the code for the dbupdate! method.
The dbupdate! method is a wrapper for the mysql select and update syntax, so that if a record exists, it will be updated, and if it does not exist, it will be added.
OK - here are all three memcached methods:
### Simple memcached methods.
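# Requires the memcache-client gem (require 'memcache').
# MEMCACHEDHOSTS (an array of 'host:port' strings) and @expiry (seconds)
# are defined elsewhere in the app.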
def mchandle(namespace="pimp")
begin
MemCache.new(MEMCACHEDHOSTS, :namespace => namespace, :multithread => true)
rescue MemCache::MemCacheError => theerror
$stderr.print("Error creating memcached handle for this thread #{theerror}")
return nil
end
end
def mcput(mchandle,key,value)
begin
mchandle.set(key,value,@expiry) # expire after @expiry seconds
rescue MemCache::MemCacheError => theerror
$stderr.print("Error in Persist::mcput #{theerror}")
raise "Persist:mcput: #{theerror}"
end
end
def mcget(mchandle,key)
begin
mchandle[key]
rescue MemCache::MemCacheError => theerror
$stderr.print("Error in Persist::mcget #{theerror}")
raise "Persist:mcget: #{theerror}"
end
end
So - all that is required is a handle to the memcached, and a get method and a put method.
Now here are the two corresponding Mysql methods: wrapget and wrapput.
def wrapget(mchandle,dbhandle,table,what,where)
# totally generic, no set db or ms.
# On a DB hit, convert the result to an array of rows, each row a hash of columns in string format, to match how db results are
# cached in the wrapput, like [{val=>"one",uri=>"two"}] and insert in the memcached.
mckey = (where.gsub(/AND|and/,'') +'_'+ what +'_'+ table ).gsub(/[^a-zA-Z0-9_]/,'')
#$stderr.print("\n in wrapget, the mckey is #{mckey} (where_what_table)")
result = mcget(mchandle,mckey)
if(result != nil)
$stderr.print("\ncache HIT: #{mckey}")
else
$stderr.print("\ncache MISS: #{mckey}")
result = dbselect(dbhandle,table,what,where)
if(result.num_rows == 0)
$stderr.print("\ndb MISS: #{mckey}")
else #cache the hit for next time.
$stderr.print("\ndb HIT: #{mckey}")
cache_result = Array.new
result.each_hash{|x| cache_result.push(x)}
mcput(mchandle,mckey,cache_result)
result = cache_result #replace the result to pass out.
end
end
result #return result
end
def wrapput(mchandle,dbhandle,table,set,where)
# totally generic, no set db or ms.
# to make the mckey, strip the values from set, so that hval=123,hkey="shiz" gets condensed to hvalhkey, just for the purposes of setting the mc key.
# to memcached, insert an array of hash rows based on the sql, so "SET val= 'one', uri='two'" becomes [{val=>"one",uri=>"two"}]
mckey = (where.gsub(/AND|and/,'') +'_'+ set.gsub(/=.*?,|=.*$/,'') +'_'+ table).gsub(/[^a-zA-Z0-9_]/,'')
#$stderr.print("\n in wrapput, the mckey is #{mckey}")
dbupdate!(dbhandle,table,set,where)
set_hash = {}
set.split(',').each{|item| kv = item.split('='); set_hash.store(kv[0],kv[1])}
mcput(mchandle,mckey,[set_hash])
#$stderr.print("\n wrapput - here's the hash we are storing in the mc: #{[set_hash]}, it is class: #{[set_hash].class} ")
end
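To tie it together, here's how these get called - a hedged sketch, since the table, columns, and values are invented, and it assumes dbhandle and the dbselect/dbupdate! helpers mentioned above are already set up:
mc = mchandle('pimp')
# write-through: update the DB and prime the cache in one call
wrapput(mc, dbhandle, 'users', "name='anthony', city='la'", "id='42'")
# read: served from memcached until the entry expires, then from MySQL
rows = wrapget(mc, dbhandle, 'users', 'name,city', "id='42'")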
One interesting thing to note: I convert both the insert/update and the select from a Mysql::Result to an Array of hashes like [{name=>"anthony"}, {name=>"musetta"}], which mimics the structure of the mysql result set. This makes the cached result more consistent, and you could read the caches from nodes that haven't included the 'mysql.rb' driver and don't have the compiled mysql client.